00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 982
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3649
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.097 The recommended git tool is: git
00:00:00.097 using credential 00000000-0000-0000-0000-000000000002
00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.136 Fetching changes from the remote Git repository
00:00:00.138 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.200 Using shallow fetch with depth 1
00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.200 > git --version # timeout=10
00:00:00.229 > git --version # 'git version 2.39.2'
00:00:00.229 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.259 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.259 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.320 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.332 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.345 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.345 > git config core.sparsecheckout # timeout=10
00:00:06.355 > git read-tree -mu HEAD # timeout=10
00:00:06.371 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.395 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.395 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.516 [Pipeline] Start of Pipeline
00:00:06.529 [Pipeline] library
00:00:06.531 Loading library shm_lib@master
00:00:06.531 Library shm_lib@master is cached. Copying from home.
00:00:06.544 [Pipeline] node
00:00:06.553 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:00:06.555 [Pipeline] {
00:00:06.567 [Pipeline] catchError
00:00:06.569 [Pipeline] {
00:00:06.585 [Pipeline] wrap
00:00:06.595 [Pipeline] {
00:00:06.604 [Pipeline] stage
00:00:06.606 [Pipeline] { (Prologue)
00:00:06.623 [Pipeline] echo
00:00:06.624 Node: VM-host-SM9
00:00:06.632 [Pipeline] cleanWs
00:00:06.641 [WS-CLEANUP] Deleting project workspace...
00:00:06.641 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.647 [WS-CLEANUP] done
00:00:06.839 [Pipeline] setCustomBuildProperty
00:00:06.914 [Pipeline] httpRequest
00:00:07.267 [Pipeline] echo
00:00:07.268 Sorcerer 10.211.164.20 is alive
00:00:07.277 [Pipeline] retry
00:00:07.279 [Pipeline] {
00:00:07.292 [Pipeline] httpRequest
00:00:07.296 HttpMethod: GET
00:00:07.297 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.298 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.308 Response Code: HTTP/1.1 200 OK
00:00:07.309 Success: Status code 200 is in the accepted range: 200,404
00:00:07.309 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.734 [Pipeline] }
00:00:08.751 [Pipeline] // retry
00:00:08.758 [Pipeline] sh
00:00:09.038 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.052 [Pipeline] httpRequest
00:00:09.467 [Pipeline] echo
00:00:09.468 Sorcerer 10.211.164.20 is alive
00:00:09.478 [Pipeline] retry
00:00:09.480 [Pipeline] {
00:00:09.493 [Pipeline] httpRequest
00:00:09.498 HttpMethod: GET
00:00:09.498 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.499 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.522 Response Code: HTTP/1.1 200 OK
00:00:09.523 Success: Status code 200 is in the accepted range: 200,404
00:00:09.523 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:36.355 [Pipeline] }
00:01:36.373 [Pipeline] // retry
00:01:36.381 [Pipeline] sh
00:01:36.662 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:39.962 [Pipeline] sh
00:01:40.245 + git -C spdk log --oneline -n5
00:01:40.245 c13c99a5e test: Various fixes for Fedora40
00:01:40.245 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:40.245 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:40.245 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:40.245 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:40.265 [Pipeline] withCredentials
00:01:40.276 > git --version # timeout=10
00:01:40.290 > git --version # 'git version 2.39.2'
00:01:40.303 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:40.305 [Pipeline] {
00:01:40.316 [Pipeline] retry
00:01:40.318 [Pipeline] {
00:01:40.336 [Pipeline] sh
00:01:40.616 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:40.628 [Pipeline] }
00:01:40.650 [Pipeline] // retry
00:01:40.656 [Pipeline] }
00:01:40.675 [Pipeline] // withCredentials
00:01:40.686 [Pipeline] httpRequest
00:01:41.065 [Pipeline] echo
00:01:41.067 Sorcerer 10.211.164.20 is alive
00:01:41.079 [Pipeline] retry
00:01:41.081 [Pipeline] {
00:01:41.098 [Pipeline] httpRequest
00:01:41.103 HttpMethod: GET
00:01:41.104 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:41.105 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:41.111 Response Code: HTTP/1.1 200 OK
00:01:41.112 Success: Status code 200 is in the accepted range: 200,404
00:01:41.112 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:54.235 [Pipeline] }
00:01:54.252 [Pipeline] // retry
00:01:54.260 [Pipeline] sh
00:01:54.540 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:56.458 [Pipeline] sh
00:01:56.744 + git -C dpdk log --oneline -n5
00:01:56.744 caf0f5d395 version: 22.11.4
00:01:56.744 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:56.744 dc9c799c7d vhost: fix missing spinlock unlock
00:01:56.744 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:56.744 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:56.763 [Pipeline] writeFile
00:01:56.780 [Pipeline] sh
00:01:57.065 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:57.078 [Pipeline] sh
00:01:57.361 + cat autorun-spdk.conf
00:01:57.361 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.361 SPDK_TEST_NVMF=1
00:01:57.361 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.361 SPDK_TEST_URING=1
00:01:57.361 SPDK_TEST_USDT=1
00:01:57.361 SPDK_RUN_UBSAN=1
00:01:57.361 NET_TYPE=virt
00:01:57.361 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:57.361 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:57.361 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:57.368 RUN_NIGHTLY=1
00:01:57.370 [Pipeline] }
00:01:57.386 [Pipeline] // stage
00:01:57.404 [Pipeline] stage
00:01:57.406 [Pipeline] { (Run VM)
00:01:57.421 [Pipeline] sh
00:01:57.706 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:57.706 + echo 'Start stage prepare_nvme.sh'
00:01:57.706 Start stage prepare_nvme.sh
00:01:57.706 + [[ -n 2 ]]
00:01:57.706 + disk_prefix=ex2
00:01:57.706 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]]
00:01:57.706 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]]
00:01:57.706 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf
00:01:57.706 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.706 ++ SPDK_TEST_NVMF=1
00:01:57.706 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.706 ++ SPDK_TEST_URING=1
00:01:57.706 ++ SPDK_TEST_USDT=1
00:01:57.706 ++ SPDK_RUN_UBSAN=1
00:01:57.706 ++ NET_TYPE=virt
00:01:57.706 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:57.706 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:57.706 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:57.706 ++ RUN_NIGHTLY=1
00:01:57.706 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:57.706 + nvme_files=()
00:01:57.706 + declare -A nvme_files
00:01:57.706 + backend_dir=/var/lib/libvirt/images/backends
00:01:57.706 + nvme_files['nvme.img']=5G
00:01:57.706 + nvme_files['nvme-cmb.img']=5G
00:01:57.706 + nvme_files['nvme-multi0.img']=4G
00:01:57.706 + nvme_files['nvme-multi1.img']=4G
00:01:57.706 + nvme_files['nvme-multi2.img']=4G
00:01:57.706 + nvme_files['nvme-openstack.img']=8G
00:01:57.706 + nvme_files['nvme-zns.img']=5G
00:01:57.706 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:57.706 + (( SPDK_TEST_FTL == 1 ))
00:01:57.706 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:57.706 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:01:57.706 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:01:57.706 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:01:57.706 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:01:57.706 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:01:57.706 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:57.706 + for nvme in "${!nvme_files[@]}"
00:01:57.706 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:01:57.707 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:57.707 + for nvme in "${!nvme_files[@]}"
00:01:57.707 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:01:57.707 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:57.965 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:01:57.965 + echo 'End stage prepare_nvme.sh'
00:01:57.965 End stage prepare_nvme.sh
00:01:57.976 [Pipeline] sh
00:01:58.258 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:58.258 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora39
00:01:58.258 
00:01:58.258 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant
00:01:58.258 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk
00:01:58.258 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:01:58.258 HELP=0
00:01:58.258 DRY_RUN=0
00:01:58.258 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,
00:01:58.258 NVME_DISKS_TYPE=nvme,nvme,
00:01:58.258 NVME_AUTO_CREATE=0
00:01:58.258 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,
00:01:58.258 NVME_CMB=,,
00:01:58.258 NVME_PMR=,,
00:01:58.258 NVME_ZNS=,,
00:01:58.258 NVME_MS=,,
00:01:58.258 NVME_FDP=,,
00:01:58.258 SPDK_VAGRANT_DISTRO=fedora39
00:01:58.258 SPDK_VAGRANT_VMCPU=10
00:01:58.258 SPDK_VAGRANT_VMRAM=12288
00:01:58.258 SPDK_VAGRANT_PROVIDER=libvirt
00:01:58.258 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:58.258 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:58.258 SPDK_OPENSTACK_NETWORK=0
00:01:58.258 VAGRANT_PACKAGE_BOX=0
00:01:58.258 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:58.258 FORCE_DISTRO=true
00:01:58.258 VAGRANT_BOX_VERSION=
00:01:58.258 EXTRA_VAGRANTFILES=
00:01:58.258 NIC_MODEL=e1000
00:01:58.258 
00:01:58.258 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt'
00:01:58.258 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest
00:02:01.564 Bringing machine 'default' up with 'libvirt' provider...
00:02:01.822 ==> default: Creating image (snapshot of base box volume).
00:02:02.081 ==> default: Creating domain with the following settings...
00:02:02.081 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732114352_49c93efeccd54dd7383a
00:02:02.081 ==> default: -- Domain type: kvm
00:02:02.081 ==> default: -- Cpus: 10
00:02:02.081 ==> default: -- Feature: acpi
00:02:02.081 ==> default: -- Feature: apic
00:02:02.081 ==> default: -- Feature: pae
00:02:02.081 ==> default: -- Memory: 12288M
00:02:02.081 ==> default: -- Memory Backing: hugepages:
00:02:02.081 ==> default: -- Management MAC:
00:02:02.081 ==> default: -- Loader:
00:02:02.081 ==> default: -- Nvram:
00:02:02.081 ==> default: -- Base box: spdk/fedora39
00:02:02.081 ==> default: -- Storage pool: default
00:02:02.081 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732114352_49c93efeccd54dd7383a.img (20G)
00:02:02.081 ==> default: -- Volume Cache: default
00:02:02.081 ==> default: -- Kernel:
00:02:02.081 ==> default: -- Initrd:
00:02:02.081 ==> default: -- Graphics Type: vnc
00:02:02.081 ==> default: -- Graphics Port: -1
00:02:02.081 ==> default: -- Graphics IP: 127.0.0.1
00:02:02.081 ==> default: -- Graphics Password: Not defined
00:02:02.081 ==> default: -- Video Type: cirrus
00:02:02.081 ==> default: -- Video VRAM: 9216
00:02:02.081 ==> default: -- Sound Type:
00:02:02.081 ==> default: -- Keymap: en-us
00:02:02.081 ==> default: -- TPM Path:
00:02:02.081 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:02.081 ==> default: -- Command line args:
00:02:02.081 ==> default: -> value=-device,
00:02:02.081 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:02:02.081 ==> default: -> value=-drive,
00:02:02.081 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0,
00:02:02.081 ==> default: -> value=-device,
00:02:02.081 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.081 ==> default: -> value=-device,
00:02:02.081 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:02:02.082 ==> default: -> value=-drive,
00:02:02.082 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:02:02.082 ==> default: -> value=-device,
00:02:02.082 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.082 ==> default: -> value=-drive,
00:02:02.082 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:02:02.082 ==> default: -> value=-device,
00:02:02.082 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.082 ==> default: -> value=-drive,
00:02:02.082 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:02:02.082 ==> default: -> value=-device,
00:02:02.082 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:02.082 ==> default: Creating shared folders metadata...
00:02:02.082 ==> default: Starting domain.
00:02:03.460 ==> default: Waiting for domain to get an IP address...
00:02:21.602 ==> default: Waiting for SSH to become available...
00:02:21.602 ==> default: Configuring and enabling network interfaces...
00:02:24.138 default: SSH address: 192.168.121.249:22
00:02:24.138 default: SSH username: vagrant
00:02:24.138 default: SSH auth method: private key
00:02:26.674 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:33.236 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:39.808 ==> default: Mounting SSHFS shared folder...
00:02:40.379 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:40.379 ==> default: Checking Mount..
00:02:41.754 ==> default: Folder Successfully Mounted!
00:02:41.754 ==> default: Running provisioner: file...
00:02:42.320 default: ~/.gitconfig => .gitconfig
00:02:42.887 
00:02:42.887 SUCCESS!
00:02:42.887 
00:02:42.887 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:42.887 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:42.887 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:42.887 
00:02:42.895 [Pipeline] }
00:02:42.909 [Pipeline] // stage
00:02:42.917 [Pipeline] dir
00:02:42.917 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt
00:02:42.919 [Pipeline] {
00:02:42.930 [Pipeline] catchError
00:02:42.931 [Pipeline] {
00:02:42.943 [Pipeline] sh
00:02:43.221 + vagrant ssh-config --host vagrant
00:02:43.221 + sed -ne /^Host/,$p
00:02:43.221 + tee ssh_conf
00:02:47.408 Host vagrant
00:02:47.408 HostName 192.168.121.249
00:02:47.408 User vagrant
00:02:47.408 Port 22
00:02:47.408 UserKnownHostsFile /dev/null
00:02:47.408 StrictHostKeyChecking no
00:02:47.408 PasswordAuthentication no
00:02:47.408 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:47.408 IdentitiesOnly yes
00:02:47.408 LogLevel FATAL
00:02:47.408 ForwardAgent yes
00:02:47.408 ForwardX11 yes
00:02:47.408 
00:02:47.422 [Pipeline] withEnv
00:02:47.424 [Pipeline] {
00:02:47.439 [Pipeline] sh
00:02:47.721 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:47.721 source /etc/os-release
00:02:47.721 [[ -e /image.version ]] && img=$(< /image.version)
00:02:47.721 # Minimal, systemd-like check.
00:02:47.721 if [[ -e /.dockerenv ]]; then
00:02:47.721 # Clear garbage from the node's name:
00:02:47.721 # agt-er_autotest_547-896 -> autotest_547-896
00:02:47.721 # $HOSTNAME is the actual container id
00:02:47.721 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:47.721 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:47.721 # We can assume this is a mount from a host where container is running,
00:02:47.721 # so fetch its hostname to easily identify the target swarm worker.
00:02:47.721 container="$(< /etc/hostname) ($agent)"
00:02:47.721 else
00:02:47.721 # Fallback
00:02:47.721 container=$agent
00:02:47.721 fi
00:02:47.721 fi
00:02:47.721 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:47.721 
00:02:47.991 [Pipeline] }
00:02:48.007 [Pipeline] // withEnv
00:02:48.015 [Pipeline] setCustomBuildProperty
00:02:48.027 [Pipeline] stage
00:02:48.029 [Pipeline] { (Tests)
00:02:48.045 [Pipeline] sh
00:02:48.324 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:48.592 [Pipeline] sh
00:02:48.867 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:49.140 [Pipeline] timeout
00:02:49.140 Timeout set to expire in 1 hr 0 min
00:02:49.142 [Pipeline] {
00:02:49.159 [Pipeline] sh
00:02:49.440 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:50.009 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:50.022 [Pipeline] sh
00:02:50.305 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:50.578 [Pipeline] sh
00:02:50.858 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:51.133 [Pipeline] sh
00:02:51.414 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo
00:02:51.673 ++ readlink -f spdk_repo
00:02:51.673 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:51.673 + [[ -n /home/vagrant/spdk_repo ]]
00:02:51.673 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:51.673 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:51.673 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:51.673 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:51.673 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:51.673 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]]
00:02:51.673 + cd /home/vagrant/spdk_repo
00:02:51.673 + source /etc/os-release
00:02:51.673 ++ NAME='Fedora Linux'
00:02:51.673 ++ VERSION='39 (Cloud Edition)'
00:02:51.673 ++ ID=fedora
00:02:51.673 ++ VERSION_ID=39
00:02:51.673 ++ VERSION_CODENAME=
00:02:51.673 ++ PLATFORM_ID=platform:f39
00:02:51.673 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:51.673 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:51.673 ++ LOGO=fedora-logo-icon
00:02:51.673 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:51.673 ++ HOME_URL=https://fedoraproject.org/
00:02:51.673 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:51.673 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:51.673 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:51.673 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:51.673 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:51.673 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:51.673 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:51.673 ++ SUPPORT_END=2024-11-12
00:02:51.673 ++ VARIANT='Cloud Edition'
00:02:51.673 ++ VARIANT_ID=cloud
00:02:51.673 + uname -a
00:02:51.673 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:51.673 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:51.673 Hugepages
00:02:51.673 node hugesize free / total
00:02:51.673 node0 1048576kB 0 / 0
00:02:51.673 node0 2048kB 0 / 0
00:02:51.673 
00:02:51.673 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:51.673 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:51.673 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:51.674 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:51.933 + rm -f /tmp/spdk-ld-path
00:02:51.933 + source autorun-spdk.conf
00:02:51.933 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:51.933 ++ SPDK_TEST_NVMF=1
00:02:51.933 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:51.933 ++ SPDK_TEST_URING=1
00:02:51.933 ++ SPDK_TEST_USDT=1
00:02:51.933 ++ SPDK_RUN_UBSAN=1
00:02:51.933 ++ NET_TYPE=virt
00:02:51.933 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:51.933 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:51.933 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:51.933 ++ RUN_NIGHTLY=1
00:02:51.933 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:51.933 + [[ -n '' ]]
00:02:51.933 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:51.933 + for M in /var/spdk/build-*-manifest.txt
00:02:51.933 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:51.933 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:51.933 + for M in /var/spdk/build-*-manifest.txt
00:02:51.933 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:51.933 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:51.933 + for M in /var/spdk/build-*-manifest.txt
00:02:51.933 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:51.933 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:51.933 ++ uname
00:02:51.933 + [[ Linux == \L\i\n\u\x ]]
00:02:51.933 + sudo dmesg -T
00:02:51.933 + sudo dmesg --clear
00:02:51.933 + dmesg_pid=5972
00:02:51.933 + [[ Fedora Linux == FreeBSD ]]
00:02:51.933 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:51.933 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:51.933 + sudo dmesg
-Tw 00:02:51.933 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:51.933 + [[ -x /usr/src/fio-static/fio ]] 00:02:51.933 + export FIO_BIN=/usr/src/fio-static/fio 00:02:51.933 + FIO_BIN=/usr/src/fio-static/fio 00:02:51.933 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:51.933 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:51.933 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:51.933 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:51.933 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:51.933 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:51.934 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:51.934 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:51.934 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:51.934 Test configuration: 00:02:51.934 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:51.934 SPDK_TEST_NVMF=1 00:02:51.934 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:51.934 SPDK_TEST_URING=1 00:02:51.934 SPDK_TEST_USDT=1 00:02:51.934 SPDK_RUN_UBSAN=1 00:02:51.934 NET_TYPE=virt 00:02:51.934 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:51.934 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:51.934 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:51.934 RUN_NIGHTLY=1 14:53:22 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:02:51.934 14:53:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:51.934 14:53:22 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:51.934 14:53:22 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:51.934 14:53:22 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:51.934 14:53:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.934 14:53:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.934 14:53:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.934 14:53:22 -- paths/export.sh@5 -- $ export PATH 00:02:51.934 14:53:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:51.934 14:53:22 -- common/autobuild_common.sh@439 
-- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:51.934 14:53:22 -- common/autobuild_common.sh@440 -- $ date +%s 00:02:51.934 14:53:22 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732114402.XXXXXX 00:02:51.934 14:53:22 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114402.zkDS4t 00:02:51.934 14:53:22 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:02:51.934 14:53:22 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:02:51.934 14:53:22 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:51.934 14:53:22 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:51.934 14:53:22 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:51.934 14:53:22 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:51.934 14:53:22 -- common/autobuild_common.sh@456 -- $ get_config_params 00:02:51.934 14:53:22 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:02:51.934 14:53:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.194 14:53:22 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:52.194 14:53:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:52.194 14:53:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:52.194 14:53:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:52.194 14:53:22 -- spdk/autobuild.sh@16 -- $ date -u 00:02:52.194 Wed Nov 20 02:53:22 PM UTC 2024 00:02:52.194 14:53:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:52.194 LTS-67-gc13c99a5e 00:02:52.194 14:53:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:52.194 14:53:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:52.194 14:53:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:52.194 14:53:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:52.194 14:53:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:52.194 14:53:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.194 ************************************ 00:02:52.194 START TEST ubsan 00:02:52.194 ************************************ 00:02:52.194 using ubsan 00:02:52.194 14:53:22 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:02:52.194 00:02:52.194 real 0m0.000s 00:02:52.194 user 0m0.000s 00:02:52.194 sys 0m0.000s 00:02:52.194 14:53:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:52.194 14:53:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:52.194 ************************************ 00:02:52.194 END TEST ubsan 00:02:52.194 ************************************ 00:02:52.194 14:53:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:52.194 14:53:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:52.194 14:53:22 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:52.194 14:53:22 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:52.194 14:53:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:52.194 14:53:22 -- common/autotest_common.sh@10 -- $ set +x 
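The config_params string assembled above is the option set that SPDK's configure step consumes later in the run; that invocation is not captured in this excerpt, so the following is only a rough sketch of how such a flag set is typically applied by hand, using a subset of the flags recorded in this log (paths as used in this job):
  # Sketch only: applying part of the config_params captured above manually.
  # The authoritative flow lives in spdk/autobuild.sh and is not shown here.
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-usdt --enable-ubsan \
              --with-uring --with-ublk --with-fio=/usr/src/fio \
              --with-dpdk=/home/vagrant/spdk_repo/dpdk/build   # external DPDK built by this job
  make -j"$(nproc)"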
00:02:52.194 ************************************ 00:02:52.194 START TEST build_native_dpdk 00:02:52.194 ************************************ 00:02:52.194 14:53:22 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:02:52.194 14:53:22 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:52.194 14:53:22 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:52.194 14:53:22 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:52.194 14:53:22 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:52.194 14:53:22 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:52.194 14:53:22 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:52.194 14:53:22 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:52.194 14:53:22 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:52.194 14:53:22 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:52.194 14:53:22 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:52.194 14:53:22 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:52.194 14:53:22 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:52.194 14:53:22 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:52.194 14:53:22 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:52.194 14:53:22 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:52.194 14:53:22 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:52.194 14:53:22 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:52.194 caf0f5d395 version: 22.11.4 00:02:52.194 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:52.194 dc9c799c7d vhost: fix missing spinlock unlock 00:02:52.194 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:52.194 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:52.194 14:53:22 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:52.194 14:53:22 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:52.194 14:53:22 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:52.194 14:53:22 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:52.194 14:53:22 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:52.194 14:53:22 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:52.194 14:53:22 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:52.194 14:53:22 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:52.194 14:53:22 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 
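The trace that follows steps through a version-comparison helper twice: once to check the DPDK tree (22.11.4) against 21.11.0 and once against 24.07.0, which decides that the rte_config.h and rte_pcapng.c patches should be applied. A minimal standalone sketch of the same field-by-field idea is shown below; version_lt is a hypothetical helper, not the project's actual cmp_versions in scripts/common.sh:
  # Hypothetical stand-in for the comparison exercised in the trace below.
  # Splits two dotted versions on ".", "-" or ":" and compares field by field.
  version_lt() {
      local IFS=.-:
      local -a a b
      local i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 22.11.4 21.11.0 || echo "not older"   # mirrors the 'return 1' seen below
  version_lt 22.11.4 24.07.0 && echo "older"       # mirrors the 'return 0' seen below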
00:02:52.194 14:53:22 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:52.194 14:53:22 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:52.194 14:53:22 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:52.194 14:53:22 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:52.194 14:53:22 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:52.194 14:53:22 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:52.194 14:53:22 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:52.194 14:53:22 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:52.194 14:53:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:52.194 14:53:22 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:52.194 14:53:22 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:52.194 14:53:22 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:52.194 14:53:22 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:52.194 14:53:22 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:52.194 14:53:22 -- scripts/common.sh@343 -- $ case "$op" in 00:02:52.194 14:53:22 -- scripts/common.sh@344 -- $ : 1 00:02:52.194 14:53:22 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:52.194 14:53:22 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:52.194 14:53:22 -- scripts/common.sh@364 -- $ decimal 22 00:02:52.194 14:53:22 -- scripts/common.sh@352 -- $ local d=22 00:02:52.194 14:53:22 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:52.194 14:53:22 -- scripts/common.sh@354 -- $ echo 22 00:02:52.194 14:53:22 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:52.194 14:53:22 -- scripts/common.sh@365 -- $ decimal 21 00:02:52.194 14:53:22 -- scripts/common.sh@352 -- $ local d=21 00:02:52.194 14:53:22 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:52.194 14:53:22 -- scripts/common.sh@354 -- $ echo 21 00:02:52.194 14:53:22 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:52.194 14:53:22 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:52.194 14:53:22 -- scripts/common.sh@366 -- $ return 1 00:02:52.194 14:53:22 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:52.194 patching file config/rte_config.h 00:02:52.194 Hunk #1 succeeded at 60 (offset 1 line). 00:02:52.194 14:53:22 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:52.194 14:53:22 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:52.194 14:53:22 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:52.194 14:53:22 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:52.194 14:53:22 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:52.194 14:53:22 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:52.194 14:53:22 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:52.194 14:53:22 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:52.194 14:53:22 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:52.194 14:53:22 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:52.194 14:53:22 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:52.194 14:53:22 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:52.194 14:53:22 -- scripts/common.sh@343 -- $ case "$op" in 00:02:52.194 14:53:22 -- scripts/common.sh@344 -- $ : 1 00:02:52.194 14:53:22 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:52.194 14:53:22 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:52.194 14:53:22 -- scripts/common.sh@364 -- $ decimal 22 00:02:52.194 14:53:22 -- scripts/common.sh@352 -- $ local d=22 00:02:52.194 14:53:22 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:52.194 14:53:22 -- scripts/common.sh@354 -- $ echo 22 00:02:52.194 14:53:22 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:52.194 14:53:22 -- scripts/common.sh@365 -- $ decimal 24 00:02:52.194 14:53:22 -- scripts/common.sh@352 -- $ local d=24 00:02:52.194 14:53:22 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:52.194 14:53:22 -- scripts/common.sh@354 -- $ echo 24 00:02:52.194 14:53:22 -- scripts/common.sh@365 -- $ ver2[v]=24 00:02:52.194 14:53:22 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:52.194 14:53:22 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:02:52.194 14:53:22 -- scripts/common.sh@367 -- $ return 0 00:02:52.194 14:53:22 -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:52.194 patching file lib/pcapng/rte_pcapng.c 00:02:52.194 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:52.194 14:53:22 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:02:52.194 14:53:22 -- common/autobuild_common.sh@181 -- $ uname -s 00:02:52.194 14:53:22 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:02:52.194 14:53:22 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:52.194 14:53:22 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:57.496 The Meson build system 00:02:57.496 Version: 1.5.0 00:02:57.496 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:57.496 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:57.496 Build type: native build 00:02:57.496 Program cat found: YES (/usr/bin/cat) 00:02:57.496 Project name: DPDK 00:02:57.496 Project version: 22.11.4 00:02:57.496 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:57.496 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:57.496 Host machine cpu family: x86_64 00:02:57.496 Host machine cpu: x86_64 00:02:57.496 Message: ## Building in Developer Mode ## 00:02:57.496 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:57.496 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:57.496 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:57.496 Program objdump found: YES (/usr/bin/objdump) 00:02:57.496 Program python3 found: YES (/usr/bin/python3) 00:02:57.496 Program cat found: YES (/usr/bin/cat) 00:02:57.496 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:57.496 Checking for size of "void *" : 8 00:02:57.496 Checking for size of "void *" : 8 (cached) 00:02:57.496 Library m found: YES 00:02:57.496 Library numa found: YES 00:02:57.496 Has header "numaif.h" : YES 00:02:57.496 Library fdt found: NO 00:02:57.496 Library execinfo found: NO 00:02:57.496 Has header "execinfo.h" : YES 00:02:57.496 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:57.496 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:57.496 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:57.496 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:57.496 Run-time dependency openssl found: YES 3.1.1 00:02:57.496 Run-time dependency libpcap found: YES 1.10.4 00:02:57.496 Has header "pcap.h" with dependency libpcap: YES 00:02:57.496 Compiler for C supports arguments -Wcast-qual: YES 00:02:57.496 Compiler for C supports arguments -Wdeprecated: YES 00:02:57.496 Compiler for C supports arguments -Wformat: YES 00:02:57.496 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:57.496 Compiler for C supports arguments -Wformat-security: NO 00:02:57.496 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:57.496 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:57.496 Compiler for C supports arguments -Wnested-externs: YES 00:02:57.496 Compiler for C supports arguments -Wold-style-definition: YES 00:02:57.496 Compiler for C supports arguments -Wpointer-arith: YES 00:02:57.496 Compiler for C supports arguments -Wsign-compare: YES 00:02:57.496 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:57.496 Compiler for C supports arguments -Wundef: YES 00:02:57.496 Compiler for C supports arguments -Wwrite-strings: YES 00:02:57.496 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:57.496 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:57.496 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.496 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:57.496 Compiler for C supports arguments -mavx512f: YES 00:02:57.496 Checking if "AVX512 checking" compiles: YES 00:02:57.496 Fetching value of define "__SSE4_2__" : 1 00:02:57.496 Fetching value of define "__AES__" : 1 00:02:57.496 Fetching value of define "__AVX__" : 1 00:02:57.496 Fetching value of define "__AVX2__" : 1 00:02:57.496 Fetching value of define "__AVX512BW__" : (undefined) 00:02:57.496 Fetching value of define "__AVX512CD__" : (undefined) 00:02:57.496 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:57.496 Fetching value of define "__AVX512F__" : (undefined) 00:02:57.496 Fetching value of define "__AVX512VL__" : (undefined) 00:02:57.496 Fetching value of define "__PCLMUL__" : 1 00:02:57.496 Fetching value of define "__RDRND__" : 1 00:02:57.496 Fetching value of define "__RDSEED__" : 1 00:02:57.496 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:57.496 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.496 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.496 Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.496 Checking for function "getentropy" : YES 00:02:57.496 Message: lib/eal: Defining dependency "eal" 00:02:57.496 Message: lib/ring: Defining dependency "ring" 00:02:57.496 Message: lib/rcu: Defining dependency "rcu" 00:02:57.496 Message: lib/mempool: Defining dependency "mempool" 00:02:57.496 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.496 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:57.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.496 Compiler for C supports arguments -mpclmul: YES 00:02:57.496 Compiler for C supports arguments -maes: YES 00:02:57.496 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.496 Compiler for C supports arguments -mavx512bw: YES 00:02:57.496 Compiler for C supports arguments -mavx512dq: YES 00:02:57.496 Compiler for C supports arguments -mavx512vl: YES 00:02:57.496 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:57.496 Compiler for C supports arguments -mavx2: YES 00:02:57.496 Compiler for C supports arguments -mavx: YES 00:02:57.496 Message: lib/net: Defining dependency "net" 00:02:57.496 Message: lib/meter: Defining dependency "meter" 00:02:57.496 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.496 Message: lib/pci: Defining dependency "pci" 00:02:57.496 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.496 Message: lib/metrics: Defining dependency "metrics" 00:02:57.496 Message: lib/hash: Defining dependency "hash" 00:02:57.496 Message: lib/timer: Defining dependency "timer" 00:02:57.496 Fetching value of define "__AVX2__" : 1 (cached) 00:02:57.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.496 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:57.496 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:57.496 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:57.496 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:57.496 Message: lib/acl: Defining dependency "acl" 00:02:57.496 Message: lib/bbdev: Defining dependency "bbdev" 00:02:57.496 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:57.496 Run-time dependency libelf found: YES 0.191 00:02:57.496 Message: lib/bpf: Defining dependency "bpf" 00:02:57.496 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:57.496 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.496 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.496 Message: lib/distributor: Defining dependency "distributor" 00:02:57.496 Message: lib/efd: Defining dependency "efd" 00:02:57.496 Message: lib/eventdev: Defining dependency "eventdev" 00:02:57.496 Message: lib/gpudev: Defining dependency "gpudev" 00:02:57.496 Message: lib/gro: Defining dependency "gro" 00:02:57.496 Message: lib/gso: Defining dependency "gso" 00:02:57.496 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:57.496 Message: lib/jobstats: Defining dependency "jobstats" 00:02:57.496 Message: lib/latencystats: Defining dependency "latencystats" 00:02:57.496 Message: lib/lpm: Defining dependency "lpm" 00:02:57.496 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.496 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:57.496 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:57.496 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:57.496 Message: lib/member: Defining dependency "member" 00:02:57.496 Message: lib/pcapng: Defining dependency "pcapng" 00:02:57.496 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:57.496 Message: lib/power: Defining dependency "power" 00:02:57.496 Message: lib/rawdev: Defining dependency "rawdev" 00:02:57.496 Message: lib/regexdev: Defining dependency "regexdev" 00:02:57.496 Message: lib/dmadev: Defining dependency "dmadev" 00:02:57.496 Message: lib/rib: Defining 
dependency "rib" 00:02:57.496 Message: lib/reorder: Defining dependency "reorder" 00:02:57.496 Message: lib/sched: Defining dependency "sched" 00:02:57.497 Message: lib/security: Defining dependency "security" 00:02:57.497 Message: lib/stack: Defining dependency "stack" 00:02:57.497 Has header "linux/userfaultfd.h" : YES 00:02:57.497 Message: lib/vhost: Defining dependency "vhost" 00:02:57.497 Message: lib/ipsec: Defining dependency "ipsec" 00:02:57.497 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:57.497 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:57.497 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:57.497 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:57.497 Message: lib/fib: Defining dependency "fib" 00:02:57.497 Message: lib/port: Defining dependency "port" 00:02:57.497 Message: lib/pdump: Defining dependency "pdump" 00:02:57.497 Message: lib/table: Defining dependency "table" 00:02:57.497 Message: lib/pipeline: Defining dependency "pipeline" 00:02:57.497 Message: lib/graph: Defining dependency "graph" 00:02:57.497 Message: lib/node: Defining dependency "node" 00:02:57.497 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:57.497 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:57.497 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:57.497 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:57.497 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:57.497 Compiler for C supports arguments -Wno-unused-value: YES 00:02:57.497 Compiler for C supports arguments -Wno-format: YES 00:02:57.497 Compiler for C supports arguments -Wno-format-security: YES 00:02:57.497 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:59.399 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:59.399 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:59.399 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:59.399 Fetching value of define "__AVX2__" : 1 (cached) 00:02:59.399 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:59.399 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.399 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:59.399 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:59.399 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:59.399 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:59.399 Configuring doxy-api.conf using configuration 00:02:59.399 Program sphinx-build found: NO 00:02:59.399 Configuring rte_build_config.h using configuration 00:02:59.399 Message: 00:02:59.399 ================= 00:02:59.399 Applications Enabled 00:02:59.399 ================= 00:02:59.399 00:02:59.399 apps: 00:02:59.399 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:59.399 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:59.399 test-security-perf, 00:02:59.399 00:02:59.399 Message: 00:02:59.399 ================= 00:02:59.399 Libraries Enabled 00:02:59.399 ================= 00:02:59.399 00:02:59.399 libs: 00:02:59.399 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:59.399 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:59.399 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:59.399 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:59.399 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:59.399 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:59.399 table, pipeline, graph, node, 00:02:59.399 00:02:59.399 Message: 00:02:59.399 =============== 00:02:59.399 Drivers Enabled 00:02:59.399 =============== 00:02:59.399 00:02:59.399 common: 00:02:59.399 00:02:59.399 bus: 00:02:59.399 pci, vdev, 00:02:59.399 mempool: 00:02:59.399 ring, 00:02:59.399 dma: 00:02:59.399 00:02:59.399 net: 00:02:59.399 i40e, 00:02:59.399 raw: 00:02:59.399 00:02:59.399 crypto: 00:02:59.399 00:02:59.399 compress: 00:02:59.399 00:02:59.399 regex: 00:02:59.399 00:02:59.399 vdpa: 00:02:59.399 00:02:59.399 event: 00:02:59.399 00:02:59.400 baseband: 00:02:59.400 00:02:59.400 gpu: 00:02:59.400 00:02:59.400 00:02:59.400 Message: 00:02:59.400 ================= 00:02:59.400 Content Skipped 00:02:59.400 ================= 00:02:59.400 00:02:59.400 apps: 00:02:59.400 00:02:59.400 libs: 00:02:59.400 kni: explicitly disabled via build config (deprecated lib) 00:02:59.400 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:59.400 00:02:59.400 drivers: 00:02:59.400 common/cpt: not in enabled drivers build config 00:02:59.400 common/dpaax: not in enabled drivers build config 00:02:59.400 common/iavf: not in enabled drivers build config 00:02:59.400 common/idpf: not in enabled drivers build config 00:02:59.400 common/mvep: not in enabled drivers build config 00:02:59.400 common/octeontx: not in enabled drivers build config 00:02:59.400 bus/auxiliary: not in enabled drivers build config 00:02:59.400 bus/dpaa: not in enabled drivers build config 00:02:59.400 bus/fslmc: not in enabled drivers build config 00:02:59.400 bus/ifpga: not in enabled drivers build config 00:02:59.400 bus/vmbus: not in enabled drivers build config 00:02:59.400 common/cnxk: not in enabled drivers build config 00:02:59.400 common/mlx5: not in enabled drivers build config 00:02:59.400 common/qat: not in enabled drivers build config 00:02:59.400 common/sfc_efx: not in enabled drivers build config 00:02:59.400 mempool/bucket: not in enabled drivers build config 00:02:59.400 mempool/cnxk: not in enabled drivers build config 00:02:59.400 mempool/dpaa: not in enabled drivers build config 00:02:59.400 mempool/dpaa2: not in enabled drivers build config 00:02:59.400 mempool/octeontx: not in enabled drivers build config 00:02:59.400 mempool/stack: not in enabled drivers build config 00:02:59.400 dma/cnxk: not in enabled drivers build config 00:02:59.400 dma/dpaa: not in enabled drivers build config 00:02:59.400 dma/dpaa2: not in enabled drivers build config 00:02:59.400 dma/hisilicon: not in enabled drivers build config 00:02:59.400 dma/idxd: not in enabled drivers build config 00:02:59.400 dma/ioat: not in enabled drivers build config 00:02:59.400 dma/skeleton: not in enabled drivers build config 00:02:59.400 net/af_packet: not in enabled drivers build config 00:02:59.400 net/af_xdp: not in enabled drivers build config 00:02:59.400 net/ark: not in enabled drivers build config 00:02:59.400 net/atlantic: not in enabled drivers build config 00:02:59.400 net/avp: not in enabled drivers build config 00:02:59.400 net/axgbe: not in enabled drivers build config 00:02:59.400 net/bnx2x: not in enabled drivers build config 00:02:59.400 net/bnxt: not in enabled drivers build config 00:02:59.400 net/bonding: not in enabled drivers build config 00:02:59.400 net/cnxk: not in enabled drivers build config 00:02:59.400 net/cxgbe: not in 
enabled drivers build config 00:02:59.400 net/dpaa: not in enabled drivers build config 00:02:59.400 net/dpaa2: not in enabled drivers build config 00:02:59.400 net/e1000: not in enabled drivers build config 00:02:59.400 net/ena: not in enabled drivers build config 00:02:59.400 net/enetc: not in enabled drivers build config 00:02:59.400 net/enetfec: not in enabled drivers build config 00:02:59.400 net/enic: not in enabled drivers build config 00:02:59.400 net/failsafe: not in enabled drivers build config 00:02:59.400 net/fm10k: not in enabled drivers build config 00:02:59.400 net/gve: not in enabled drivers build config 00:02:59.400 net/hinic: not in enabled drivers build config 00:02:59.400 net/hns3: not in enabled drivers build config 00:02:59.400 net/iavf: not in enabled drivers build config 00:02:59.400 net/ice: not in enabled drivers build config 00:02:59.400 net/idpf: not in enabled drivers build config 00:02:59.400 net/igc: not in enabled drivers build config 00:02:59.400 net/ionic: not in enabled drivers build config 00:02:59.400 net/ipn3ke: not in enabled drivers build config 00:02:59.400 net/ixgbe: not in enabled drivers build config 00:02:59.400 net/kni: not in enabled drivers build config 00:02:59.400 net/liquidio: not in enabled drivers build config 00:02:59.400 net/mana: not in enabled drivers build config 00:02:59.400 net/memif: not in enabled drivers build config 00:02:59.400 net/mlx4: not in enabled drivers build config 00:02:59.400 net/mlx5: not in enabled drivers build config 00:02:59.400 net/mvneta: not in enabled drivers build config 00:02:59.400 net/mvpp2: not in enabled drivers build config 00:02:59.400 net/netvsc: not in enabled drivers build config 00:02:59.400 net/nfb: not in enabled drivers build config 00:02:59.400 net/nfp: not in enabled drivers build config 00:02:59.400 net/ngbe: not in enabled drivers build config 00:02:59.400 net/null: not in enabled drivers build config 00:02:59.400 net/octeontx: not in enabled drivers build config 00:02:59.400 net/octeon_ep: not in enabled drivers build config 00:02:59.400 net/pcap: not in enabled drivers build config 00:02:59.400 net/pfe: not in enabled drivers build config 00:02:59.400 net/qede: not in enabled drivers build config 00:02:59.400 net/ring: not in enabled drivers build config 00:02:59.400 net/sfc: not in enabled drivers build config 00:02:59.400 net/softnic: not in enabled drivers build config 00:02:59.400 net/tap: not in enabled drivers build config 00:02:59.400 net/thunderx: not in enabled drivers build config 00:02:59.400 net/txgbe: not in enabled drivers build config 00:02:59.400 net/vdev_netvsc: not in enabled drivers build config 00:02:59.400 net/vhost: not in enabled drivers build config 00:02:59.400 net/virtio: not in enabled drivers build config 00:02:59.400 net/vmxnet3: not in enabled drivers build config 00:02:59.400 raw/cnxk_bphy: not in enabled drivers build config 00:02:59.400 raw/cnxk_gpio: not in enabled drivers build config 00:02:59.400 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:59.400 raw/ifpga: not in enabled drivers build config 00:02:59.400 raw/ntb: not in enabled drivers build config 00:02:59.400 raw/skeleton: not in enabled drivers build config 00:02:59.400 crypto/armv8: not in enabled drivers build config 00:02:59.400 crypto/bcmfs: not in enabled drivers build config 00:02:59.400 crypto/caam_jr: not in enabled drivers build config 00:02:59.400 crypto/ccp: not in enabled drivers build config 00:02:59.400 crypto/cnxk: not in enabled drivers build config 00:02:59.400 
crypto/dpaa_sec: not in enabled drivers build config 00:02:59.400 crypto/dpaa2_sec: not in enabled drivers build config 00:02:59.400 crypto/ipsec_mb: not in enabled drivers build config 00:02:59.400 crypto/mlx5: not in enabled drivers build config 00:02:59.400 crypto/mvsam: not in enabled drivers build config 00:02:59.400 crypto/nitrox: not in enabled drivers build config 00:02:59.400 crypto/null: not in enabled drivers build config 00:02:59.400 crypto/octeontx: not in enabled drivers build config 00:02:59.400 crypto/openssl: not in enabled drivers build config 00:02:59.400 crypto/scheduler: not in enabled drivers build config 00:02:59.400 crypto/uadk: not in enabled drivers build config 00:02:59.400 crypto/virtio: not in enabled drivers build config 00:02:59.400 compress/isal: not in enabled drivers build config 00:02:59.400 compress/mlx5: not in enabled drivers build config 00:02:59.400 compress/octeontx: not in enabled drivers build config 00:02:59.400 compress/zlib: not in enabled drivers build config 00:02:59.400 regex/mlx5: not in enabled drivers build config 00:02:59.400 regex/cn9k: not in enabled drivers build config 00:02:59.400 vdpa/ifc: not in enabled drivers build config 00:02:59.400 vdpa/mlx5: not in enabled drivers build config 00:02:59.400 vdpa/sfc: not in enabled drivers build config 00:02:59.400 event/cnxk: not in enabled drivers build config 00:02:59.400 event/dlb2: not in enabled drivers build config 00:02:59.400 event/dpaa: not in enabled drivers build config 00:02:59.400 event/dpaa2: not in enabled drivers build config 00:02:59.400 event/dsw: not in enabled drivers build config 00:02:59.400 event/opdl: not in enabled drivers build config 00:02:59.400 event/skeleton: not in enabled drivers build config 00:02:59.400 event/sw: not in enabled drivers build config 00:02:59.400 event/octeontx: not in enabled drivers build config 00:02:59.400 baseband/acc: not in enabled drivers build config 00:02:59.400 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:59.400 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:59.400 baseband/la12xx: not in enabled drivers build config 00:02:59.400 baseband/null: not in enabled drivers build config 00:02:59.400 baseband/turbo_sw: not in enabled drivers build config 00:02:59.400 gpu/cuda: not in enabled drivers build config 00:02:59.400 00:02:59.400 00:02:59.400 Build targets in project: 314 00:02:59.400 00:02:59.400 DPDK 22.11.4 00:02:59.400 00:02:59.400 User defined options 00:02:59.400 libdir : lib 00:02:59.400 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:59.400 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:59.400 c_link_args : 00:02:59.400 enable_docs : false 00:02:59.400 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:59.400 enable_kmods : false 00:02:59.400 machine : native 00:02:59.400 tests : false 00:02:59.400 00:02:59.400 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.400 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
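Note on the configuration summary above: the "User defined options" block, together with the deprecation warning, indicates the build script invoked meson without the explicit `setup` subcommand before the ninja compile step that follows. A minimal sketch of the equivalent explicit invocation, reconstructed only from the options printed in this log (the exact command line is not shown here, so the option spellings such as -Dmachine and -Dtests are assumptions based on typical DPDK 22.11 meson options):

    # Hypothetical reconstruction of the configure step -- not the literal command from this run.
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

Spelling the subcommand as `meson setup build-tmp` rather than bare `meson build-tmp` avoids the "ambiguous and deprecated" warning emitted above; the enable_drivers list also explains why only the pci/vdev buses, the ring mempool, and the i40e net driver appear under "Drivers Enabled" while everything else is reported as "not in enabled drivers build config".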
00:02:59.400 14:53:30 -- common/autobuild_common.sh@189 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:59.400 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:59.658 [1/743] Generating lib/rte_telemetry_def with a custom command 00:02:59.658 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:59.658 [3/743] Generating lib/rte_kvargs_def with a custom command 00:02:59.658 [4/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:59.658 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:59.658 [6/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:59.658 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:59.658 [8/743] Linking static target lib/librte_kvargs.a 00:02:59.658 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:59.658 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:59.658 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:59.658 [12/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:59.658 [13/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:59.658 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:59.915 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:59.915 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:59.915 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:59.915 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:59.915 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:59.915 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.915 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:59.915 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:59.915 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:59.915 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:59.915 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:00.173 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:00.173 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:00.173 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:00.173 [29/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:00.173 [30/743] Linking static target lib/librte_telemetry.a 00:03:00.173 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:00.173 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:00.173 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:00.173 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:00.173 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:00.173 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:00.431 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:00.431 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:00.431 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:00.431 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:00.431 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:00.431 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:00.431 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.431 [44/743] Linking target lib/librte_telemetry.so.23.0 00:03:00.688 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:00.688 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:00.688 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:00.688 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:00.688 [49/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:00.688 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:00.688 [51/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:00.688 [52/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:00.688 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:00.688 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:00.688 [55/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:00.688 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:00.688 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:00.946 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:00.946 [59/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:00.946 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:00.946 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:00.946 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:00.946 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:00.946 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:00.946 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:00.946 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.946 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.946 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:00.946 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:01.204 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:01.204 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:01.204 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:01.204 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:01.204 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:01.204 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:01.204 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:01.204 [77/743] Generating lib/rte_eal_def with a custom command 00:03:01.204 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:03:01.204 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:01.204 [80/743] Generating lib/rte_ring_def with a custom command 00:03:01.204 [81/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:01.204 [82/743] Generating lib/rte_ring_mingw with a custom command 00:03:01.204 [83/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:01.204 [84/743] Generating lib/rte_rcu_def with a custom command 00:03:01.204 [85/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:01.204 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:03:01.463 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:01.463 [88/743] Linking static target lib/librte_ring.a 00:03:01.463 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:01.463 [90/743] Generating lib/rte_mempool_def with a custom command 00:03:01.463 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:03:01.463 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:01.463 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:01.722 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.722 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:01.722 [96/743] Linking static target lib/librte_eal.a 00:03:01.980 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:01.980 [98/743] Generating lib/rte_mbuf_def with a custom command 00:03:01.981 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:01.981 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:01.981 [101/743] Generating lib/rte_mbuf_mingw with a custom command 00:03:01.981 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.981 [103/743] Linking static target lib/librte_rcu.a 00:03:01.981 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:02.240 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:02.240 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:02.240 [107/743] Linking static target lib/librte_mempool.a 00:03:02.240 [108/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.498 [109/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:02.498 [110/743] Generating lib/rte_net_def with a custom command 00:03:02.498 [111/743] Generating lib/rte_net_mingw with a custom command 00:03:02.499 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:02.499 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:02.499 [114/743] Generating lib/rte_meter_def with a custom command 00:03:02.499 [115/743] Generating lib/rte_meter_mingw with a custom command 00:03:02.499 [116/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:02.499 [117/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.499 [118/743] Linking static target lib/librte_meter.a 00:03:02.757 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:02.757 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:02.757 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.757 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:03:03.016 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:03.016 [124/743] Linking static target lib/librte_mbuf.a 00:03:03.016 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:03.016 [126/743] Linking static target lib/librte_net.a 00:03:03.016 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.275 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.275 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:03.275 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:03.534 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:03.534 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:03.534 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.534 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.793 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:04.051 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:04.052 [137/743] Generating lib/rte_ethdev_def with a custom command 00:03:04.052 [138/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:04.052 [139/743] Generating lib/rte_ethdev_mingw with a custom command 00:03:04.052 [140/743] Generating lib/rte_pci_def with a custom command 00:03:04.310 [141/743] Generating lib/rte_pci_mingw with a custom command 00:03:04.310 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:04.310 [143/743] Linking static target lib/librte_pci.a 00:03:04.310 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.310 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:04.310 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.310 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:04.310 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.310 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.310 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.310 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.570 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.570 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.570 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.570 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.570 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.570 [157/743] Generating lib/rte_cmdline_def with a custom command 00:03:04.570 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.570 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:03:04.570 [160/743] Generating lib/rte_metrics_def with a custom command 00:03:04.570 [161/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.570 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:03:04.829 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.829 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:04.829 [165/743] Generating lib/rte_hash_def with a custom command 00:03:04.829 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:04.829 [167/743] Generating lib/rte_hash_mingw with a custom command 00:03:04.829 [168/743] Generating lib/rte_timer_def with a custom command 00:03:04.829 [169/743] Generating lib/rte_timer_mingw with a custom command 00:03:04.829 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:04.829 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:04.829 [172/743] Linking static target lib/librte_cmdline.a 00:03:04.829 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:05.396 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:05.396 [175/743] Linking static target lib/librte_metrics.a 00:03:05.396 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:05.396 [177/743] Linking static target lib/librte_timer.a 00:03:05.654 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.654 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.912 [180/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.912 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:05.912 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.912 [183/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:05.912 [184/743] Linking static target lib/librte_ethdev.a 00:03:06.479 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:06.479 [186/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:06.479 [187/743] Generating lib/rte_acl_def with a custom command 00:03:06.479 [188/743] Generating lib/rte_acl_mingw with a custom command 00:03:06.479 [189/743] Generating lib/rte_bbdev_def with a custom command 00:03:06.479 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:03:06.479 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:06.479 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:03:06.479 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:03:06.737 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:07.303 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:07.304 [196/743] Linking static target lib/librte_bitratestats.a 00:03:07.304 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:07.304 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.304 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:07.304 [200/743] Linking static target lib/librte_bbdev.a 00:03:07.561 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:07.818 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:07.818 [203/743] Linking static target lib/librte_hash.a 00:03:07.818 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:08.076 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.076 [206/743] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:08.076 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:08.076 [208/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:08.076 [209/743] Linking static target lib/acl/libavx512_tmp.a 00:03:08.643 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.643 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:08.643 [212/743] Generating lib/rte_bpf_def with a custom command 00:03:08.643 [213/743] Generating lib/rte_bpf_mingw with a custom command 00:03:08.643 [214/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:08.643 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:03:08.643 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:03:08.643 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:08.643 [218/743] Linking static target lib/librte_acl.a 00:03:08.901 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:08.901 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:08.901 [221/743] Linking static target lib/librte_cfgfile.a 00:03:08.901 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:08.901 [223/743] Generating lib/rte_compressdev_def with a custom command 00:03:08.901 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:03:09.159 [225/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.159 [226/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.159 [227/743] Linking target lib/librte_eal.so.23.0 00:03:09.159 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.159 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:09.159 [230/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:09.159 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:03:09.159 [232/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:09.159 [233/743] Generating lib/rte_cryptodev_mingw with a custom command 00:03:09.417 [234/743] Linking target lib/librte_ring.so.23.0 00:03:09.417 [235/743] Linking target lib/librte_meter.so.23.0 00:03:09.417 [236/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:09.417 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:09.417 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:09.417 [239/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:09.417 [240/743] Linking target lib/librte_pci.so.23.0 00:03:09.417 [241/743] Linking target lib/librte_timer.so.23.0 00:03:09.417 [242/743] Linking target lib/librte_rcu.so.23.0 00:03:09.417 [243/743] Linking target lib/librte_mempool.so.23.0 00:03:09.676 [244/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:09.676 [245/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:09.676 [246/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:09.676 [247/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:09.676 [248/743] Linking static target lib/librte_bpf.a 00:03:09.676 [249/743] 
Linking static target lib/librte_compressdev.a 00:03:09.676 [250/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:09.676 [251/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:09.676 [252/743] Linking target lib/librte_acl.so.23.0 00:03:09.676 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:03:09.676 [254/743] Linking target lib/librte_mbuf.so.23.0 00:03:09.676 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:09.676 [256/743] Generating lib/rte_distributor_def with a custom command 00:03:09.676 [257/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:09.933 [258/743] Generating lib/rte_distributor_mingw with a custom command 00:03:09.933 [259/743] Linking target lib/librte_net.so.23.0 00:03:09.933 [260/743] Linking target lib/librte_bbdev.so.23.0 00:03:09.933 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.933 [262/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.933 [263/743] Generating lib/rte_efd_def with a custom command 00:03:09.933 [264/743] Generating lib/rte_efd_mingw with a custom command 00:03:09.933 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:09.933 [266/743] Linking target lib/librte_cmdline.so.23.0 00:03:09.933 [267/743] Linking target lib/librte_hash.so.23.0 00:03:10.190 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:10.190 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:10.448 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:10.448 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:10.448 [272/743] Linking static target lib/librte_distributor.a 00:03:10.448 [273/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.448 [274/743] Linking target lib/librte_compressdev.so.23.0 00:03:10.707 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:10.707 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.707 [277/743] Linking target lib/librte_distributor.so.23.0 00:03:10.707 [278/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.707 [279/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:10.707 [280/743] Generating lib/rte_eventdev_def with a custom command 00:03:10.707 [281/743] Linking target lib/librte_ethdev.so.23.0 00:03:10.707 [282/743] Generating lib/rte_eventdev_mingw with a custom command 00:03:10.966 [283/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:10.966 [284/743] Linking target lib/librte_metrics.so.23.0 00:03:10.966 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:11.224 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:03:11.224 [287/743] Linking target lib/librte_bpf.so.23.0 00:03:11.224 [288/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:11.224 [289/743] Generating lib/rte_gpudev_def with a custom command 00:03:11.224 [290/743] Generating lib/rte_gpudev_mingw with a custom command 
00:03:11.224 [291/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:11.539 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:11.539 [293/743] Linking static target lib/librte_efd.a 00:03:11.812 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:11.812 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.812 [296/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:11.812 [297/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:11.812 [298/743] Linking static target lib/librte_gpudev.a 00:03:11.812 [299/743] Linking static target lib/librte_cryptodev.a 00:03:11.812 [300/743] Linking target lib/librte_efd.so.23.0 00:03:11.812 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:12.070 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:12.070 [303/743] Generating lib/rte_gro_def with a custom command 00:03:12.070 [304/743] Generating lib/rte_gro_mingw with a custom command 00:03:12.070 [305/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:12.070 [306/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:12.070 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:12.636 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:12.636 [309/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.636 [310/743] Linking target lib/librte_gpudev.so.23.0 00:03:12.636 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:12.636 [312/743] Generating lib/rte_gso_def with a custom command 00:03:12.636 [313/743] Generating lib/rte_gso_mingw with a custom command 00:03:12.636 [314/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:12.636 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:12.893 [316/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:12.893 [317/743] Linking static target lib/librte_gro.a 00:03:12.893 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:12.893 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:12.893 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.151 [321/743] Linking target lib/librte_gro.so.23.0 00:03:13.151 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:13.151 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:03:13.151 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:03:13.151 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:13.151 [326/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:13.151 [327/743] Linking static target lib/librte_eventdev.a 00:03:13.151 [328/743] Linking static target lib/librte_gso.a 00:03:13.409 [329/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:13.409 [330/743] Linking static target lib/librte_jobstats.a 00:03:13.409 [331/743] Generating lib/rte_jobstats_def with a custom command 00:03:13.409 [332/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:13.409 [333/743] Generating lib/rte_jobstats_mingw with a custom command 00:03:13.409 [334/743] Generating lib/gso.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:13.409 [335/743] Linking target lib/librte_gso.so.23.0 00:03:13.409 [336/743] Generating lib/rte_latencystats_def with a custom command 00:03:13.668 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:03:13.668 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:13.668 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:13.668 [340/743] Generating lib/rte_lpm_def with a custom command 00:03:13.668 [341/743] Generating lib/rte_lpm_mingw with a custom command 00:03:13.668 [342/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:13.668 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.668 [344/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.668 [345/743] Linking target lib/librte_jobstats.so.23.0 00:03:13.668 [346/743] Linking target lib/librte_cryptodev.so.23.0 00:03:13.926 [347/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:13.926 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:13.926 [349/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:13.926 [350/743] Linking static target lib/librte_ip_frag.a 00:03:14.184 [351/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:14.184 [352/743] Linking static target lib/librte_latencystats.a 00:03:14.184 [353/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.184 [354/743] Linking target lib/librte_ip_frag.so.23.0 00:03:14.443 [355/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:14.443 [356/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:14.443 [357/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:14.443 [358/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:14.443 [359/743] Generating lib/rte_member_def with a custom command 00:03:14.443 [360/743] Generating lib/rte_member_mingw with a custom command 00:03:14.443 [361/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.443 [362/743] Generating lib/rte_pcapng_def with a custom command 00:03:14.443 [363/743] Generating lib/rte_pcapng_mingw with a custom command 00:03:14.443 [364/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:14.443 [365/743] Linking target lib/librte_latencystats.so.23.0 00:03:14.443 [366/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:14.443 [367/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:14.701 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:14.701 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.701 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.959 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:14.959 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:14.959 [373/743] Linking static target lib/librte_lpm.a 00:03:15.218 [374/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.218 
[375/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:15.218 [376/743] Generating lib/rte_power_def with a custom command 00:03:15.218 [377/743] Linking target lib/librte_eventdev.so.23.0 00:03:15.218 [378/743] Generating lib/rte_power_mingw with a custom command 00:03:15.218 [379/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.218 [380/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:15.218 [381/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.218 [382/743] Generating lib/rte_rawdev_def with a custom command 00:03:15.218 [383/743] Linking target lib/librte_lpm.so.23.0 00:03:15.218 [384/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:15.476 [385/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:15.476 [386/743] Generating lib/rte_regexdev_def with a custom command 00:03:15.476 [387/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:15.476 [388/743] Generating lib/rte_dmadev_def with a custom command 00:03:15.476 [389/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:15.476 [390/743] Linking static target lib/librte_pcapng.a 00:03:15.477 [391/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:15.477 [392/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:15.477 [393/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:15.477 [394/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:15.477 [395/743] Generating lib/rte_rib_def with a custom command 00:03:15.477 [396/743] Generating lib/rte_rib_mingw with a custom command 00:03:15.477 [397/743] Generating lib/rte_reorder_def with a custom command 00:03:15.477 [398/743] Generating lib/rte_reorder_mingw with a custom command 00:03:15.736 [399/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:15.736 [400/743] Linking static target lib/librte_rawdev.a 00:03:15.736 [401/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.736 [402/743] Linking static target lib/librte_power.a 00:03:15.736 [403/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.736 [404/743] Linking target lib/librte_pcapng.so.23.0 00:03:15.736 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.736 [406/743] Linking static target lib/librte_dmadev.a 00:03:15.992 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:15.992 [408/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:15.992 [409/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.992 [410/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:15.992 [411/743] Linking static target lib/librte_regexdev.a 00:03:16.250 [412/743] Linking target lib/librte_rawdev.so.23.0 00:03:16.250 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:16.250 [414/743] Generating lib/rte_sched_def with a custom command 00:03:16.250 [415/743] Generating lib/rte_sched_mingw with a custom command 00:03:16.250 [416/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:16.250 [417/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:16.250 [418/743] Linking static 
target lib/librte_member.a 00:03:16.250 [419/743] Generating lib/rte_security_def with a custom command 00:03:16.250 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:16.250 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:16.250 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.250 [423/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:16.250 [424/743] Linking static target lib/librte_reorder.a 00:03:16.250 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:16.508 [426/743] Linking target lib/librte_dmadev.so.23.0 00:03:16.508 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:16.508 [428/743] Generating lib/rte_stack_def with a custom command 00:03:16.508 [429/743] Generating lib/rte_stack_mingw with a custom command 00:03:16.508 [430/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:16.508 [431/743] Linking static target lib/librte_stack.a 00:03:16.508 [432/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:16.508 [433/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.508 [434/743] Linking target lib/librte_member.so.23.0 00:03:16.508 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.766 [436/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.766 [437/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.766 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:16.766 [439/743] Linking static target lib/librte_rib.a 00:03:16.766 [440/743] Linking target lib/librte_reorder.so.23.0 00:03:16.766 [441/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.766 [442/743] Linking target lib/librte_power.so.23.0 00:03:16.766 [443/743] Linking target lib/librte_stack.so.23.0 00:03:17.024 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.024 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:17.024 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:17.024 [447/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.024 [448/743] Linking static target lib/librte_security.a 00:03:17.024 [449/743] Linking target lib/librte_rib.so.23.0 00:03:17.282 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:17.282 [451/743] Generating lib/rte_vhost_def with a custom command 00:03:17.282 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:03:17.282 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.282 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.540 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.540 [456/743] Linking target lib/librte_security.so.23.0 00:03:17.540 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.798 [458/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:17.798 [459/743] Linking static target lib/librte_sched.a 00:03:17.798 [460/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:18.057 [461/743] Generating 
lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.057 [462/743] Linking target lib/librte_sched.so.23.0 00:03:18.315 [463/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.315 [464/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:18.315 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:18.315 [466/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:18.315 [467/743] Generating lib/rte_ipsec_def with a custom command 00:03:18.315 [468/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:18.315 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:18.573 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:18.573 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:18.830 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:18.830 [473/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:18.830 [474/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:19.088 [475/743] Generating lib/rte_fib_def with a custom command 00:03:19.088 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:19.088 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:19.088 [478/743] Generating lib/rte_fib_mingw with a custom command 00:03:19.088 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:19.345 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:19.345 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:19.346 [482/743] Linking static target lib/librte_ipsec.a 00:03:19.603 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.860 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:19.860 [485/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:19.860 [486/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:19.860 [487/743] Linking static target lib/librte_fib.a 00:03:19.860 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:20.118 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:20.118 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:20.118 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:20.376 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.376 [493/743] Linking target lib/librte_fib.so.23.0 00:03:20.376 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:20.942 [495/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:20.942 [496/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:20.942 [497/743] Generating lib/rte_port_def with a custom command 00:03:20.942 [498/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:21.215 [499/743] Generating lib/rte_port_mingw with a custom command 00:03:21.215 [500/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:21.215 [501/743] Generating lib/rte_pdump_def with a custom command 00:03:21.215 [502/743] Generating lib/rte_pdump_mingw with a custom command 00:03:21.215 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:21.215 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:21.472 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:21.472 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:21.473 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:21.473 [508/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:21.730 [509/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:21.730 [510/743] Linking static target lib/librte_port.a 00:03:21.988 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:21.988 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:22.246 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:22.246 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.246 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:22.246 [516/743] Linking target lib/librte_port.so.23.0 00:03:22.246 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:22.505 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:22.505 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:22.505 [520/743] Linking static target lib/librte_pdump.a 00:03:22.763 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.763 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:22.763 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:22.763 [524/743] Generating lib/rte_table_def with a custom command 00:03:23.021 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:23.021 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:23.021 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:23.021 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:23.279 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:23.279 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:23.536 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:23.536 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:23.536 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:23.799 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:23.799 [535/743] Linking static target lib/librte_table.a 00:03:23.799 [536/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:24.074 [537/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:24.075 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:24.344 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.344 [540/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:24.344 [541/743] Linking target lib/librte_table.so.23.0 00:03:24.602 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:24.602 [543/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:24.602 [544/743] Generating lib/rte_graph_def with a custom command 00:03:24.602 [545/743] Compiling C object 
lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:24.602 [546/743] Generating lib/rte_graph_mingw with a custom command 00:03:24.602 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:24.859 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:25.116 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:25.116 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:25.116 [551/743] Linking static target lib/librte_graph.a 00:03:25.374 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:25.374 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:25.374 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:25.635 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:25.893 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:25.893 [557/743] Generating lib/rte_node_def with a custom command 00:03:25.893 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:25.893 [559/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:26.151 [560/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:26.151 [561/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:26.151 [562/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.151 [563/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:26.151 [564/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:26.151 [565/743] Linking target lib/librte_graph.so.23.0 00:03:26.408 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:26.408 [567/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:26.408 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:26.408 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:26.408 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:26.408 [571/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:26.408 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:26.408 [573/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:26.408 [574/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:26.408 [575/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:26.408 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:26.665 [577/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:26.665 [578/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:26.665 [579/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:26.665 [580/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:26.665 [581/743] Linking static target lib/librte_node.a 00:03:26.923 [582/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:26.923 [583/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:26.923 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:26.923 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.923 [586/743] Linking static target drivers/librte_bus_vdev.a 00:03:26.923 [587/743] 
Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.923 [588/743] Linking target lib/librte_node.so.23.0 00:03:26.923 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:26.923 [590/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:27.181 [591/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.182 [592/743] Linking static target drivers/librte_bus_pci.a 00:03:27.182 [593/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.182 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:27.182 [595/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:27.439 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:27.439 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.439 [598/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:27.439 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:27.698 [600/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:27.698 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:27.698 [602/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:27.698 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:27.698 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:27.956 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:27.956 [606/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.956 [607/743] Linking static target drivers/librte_mempool_ring.a 00:03:27.956 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:27.956 [609/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:28.215 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:28.474 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:29.038 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:29.038 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:29.038 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:29.296 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:29.554 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:29.554 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:29.812 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:30.071 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:30.329 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:30.329 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:30.587 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:30.587 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:30.587 [624/743] Generating 
drivers/rte_net_i40e_mingw with a custom command 00:03:30.587 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:31.522 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:31.779 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:31.779 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:32.036 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:32.036 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:32.036 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:32.036 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:32.293 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:32.293 [634/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:32.551 [635/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:32.551 [636/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:33.116 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:33.116 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:33.116 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:33.116 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:33.374 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:33.374 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:33.374 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:33.632 [644/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:33.632 [645/743] Linking static target drivers/librte_net_i40e.a 00:03:33.632 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:33.632 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:33.890 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:33.890 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:34.148 [650/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:34.148 [651/743] Linking static target lib/librte_vhost.a 00:03:34.148 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:34.148 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.407 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:34.407 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:34.407 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:34.666 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:34.924 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:35.182 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:35.182 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:35.182 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:35.182 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:35.182 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:35.182 [664/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.440 [665/743] Linking target lib/librte_vhost.so.23.0 00:03:35.440 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:35.440 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:35.440 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:35.698 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:35.957 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:35.957 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:36.215 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:36.215 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:36.813 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:36.813 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:37.080 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:37.080 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:37.338 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:37.338 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:37.338 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:37.339 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:37.596 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:37.855 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:37.855 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:37.855 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:38.113 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:38.113 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:38.113 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:38.371 [689/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:38.371 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:38.371 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:38.629 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:38.629 [693/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:38.629 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:39.196 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:39.196 [696/743] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:39.196 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:39.454 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:39.454 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:40.020 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:40.020 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:40.020 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:40.277 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:40.536 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:40.536 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:40.536 [706/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:40.536 [707/743] Linking static target lib/librte_pipeline.a 00:03:40.794 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:41.052 [709/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:41.052 [710/743] Linking target app/dpdk-dumpcap 00:03:41.310 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:41.310 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:41.310 [713/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:41.310 [714/743] Linking target app/dpdk-pdump 00:03:41.568 [715/743] Linking target app/dpdk-proc-info 00:03:41.568 [716/743] Linking target app/dpdk-test-acl 00:03:41.826 [717/743] Linking target app/dpdk-test-cmdline 00:03:41.826 [718/743] Linking target app/dpdk-test-bbdev 00:03:41.826 [719/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:41.826 [720/743] Linking target app/dpdk-test-compress-perf 00:03:42.084 [721/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:42.084 [722/743] Linking target app/dpdk-test-crypto-perf 00:03:42.084 [723/743] Linking target app/dpdk-test-eventdev 00:03:42.084 [724/743] Linking target app/dpdk-test-fib 00:03:42.343 [725/743] Linking target app/dpdk-test-flow-perf 00:03:42.343 [726/743] Linking target app/dpdk-test-gpudev 00:03:42.343 [727/743] Linking target app/dpdk-test-pipeline 00:03:42.601 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:42.859 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:42.859 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:43.117 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:43.117 [732/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:43.117 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:43.375 [734/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:43.375 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:43.375 [736/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.633 [737/743] Linking target lib/librte_pipeline.so.23.0 00:03:43.633 [738/743] Linking target app/dpdk-test-sad 00:03:43.633 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:43.892 [740/743] Linking target app/dpdk-test-regex 00:03:44.149 [741/743] Linking target app/dpdk-testpmd 00:03:44.149 [742/743] Compiling C object 
app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:44.407 [743/743] Linking target app/dpdk-test-security-perf 00:03:44.407 14:54:15 -- common/autobuild_common.sh@190 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:44.666 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:44.666 [0/1] Installing files. 00:03:44.927 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:44.927 
Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.927 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.928 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.929 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:44.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.930 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:44.931 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:44.932 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:44.932 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:44.932 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.191 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.192 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.192 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.192 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.192 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:45.192 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.192 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.454 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.455 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.456 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.457 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:45.457 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:45.457 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:45.457 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:45.457 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:45.457 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:45.457 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:45.457 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:45.457 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:45.457 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:45.457 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:45.457 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:45.457 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:45.457 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:45.457 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:45.457 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:45.457 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:45.457 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:45.457 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:45.457 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:45.458 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:45.458 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:45.458 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:45.458 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:45.458 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:45.458 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:45.458 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:45.458 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:45.458 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:45.458 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:45.458 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:45.458 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:45.458 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:45.458 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:45.458 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:45.458 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:45.458 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:45.458 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:45.458 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:45.458 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:45.458 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:45.458 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:45.458 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:45.458 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:45.458 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:45.458 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:45.458 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:45.458 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:45.458 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:45.458 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:45.458 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:45.458 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:45.458 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:45.458 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:45.458 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:45.458 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:45.458 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:45.458 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:45.458 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:45.458 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:45.458 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:45.458 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:45.458 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:45.458 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:45.458 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:45.458 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:45.458 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:45.458 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:45.458 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:45.458 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:45.458 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:45.458 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:45.458 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:45.458 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:45.458 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:45.458 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:45.458 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:45.458 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:45.458 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:45.458 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:45.458 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:45.458 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:45.458 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:45.458 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:45.458 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:45.458 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:45.458 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:45.458 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:45.458 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:45.458 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:45.458 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:45.458 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:45.458 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:45.458 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:45.458 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:45.458 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:45.458 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:45.458 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:45.458 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:45.458 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:45.458 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:45.458 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:45.458 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:45.458 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:45.458 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:45.458 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:45.458 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:45.458 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:45.458 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:45.458 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:45.458 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:45.458 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:45.458 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:45.458 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:45.458 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:45.458 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:45.458 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:45.458 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:45.458 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:45.458 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:45.458 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:45.458 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:45.458 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:45.458 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:45.458 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:45.458 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:45.458 14:54:16 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:45.458 14:54:16 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:45.458 14:54:16 -- common/autobuild_common.sh@203 -- $ cat 00:03:45.459 14:54:16 -- common/autobuild_common.sh@208 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:45.459 00:03:45.459 real 0m53.362s 00:03:45.459 user 6m20.930s 00:03:45.459 sys 0m56.809s 00:03:45.459 14:54:16 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:45.459 ************************************ 00:03:45.459 14:54:16 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.459 END TEST build_native_dpdk 00:03:45.459 ************************************ 00:03:45.459 14:54:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:45.459 14:54:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:45.459 14:54:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:45.459 14:54:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:45.459 14:54:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:45.459 14:54:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:45.459 14:54:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:45.459 
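A minimal sketch, not part of the captured output, of how the DPDK prefix installed above can be located by downstream builds via the libdpdk.pc file that was just copied into the prefix (assuming the same /home/vagrant/spdk_repo paths shown in the log):

  # point pkg-config at the libdpdk.pc installed under the build prefix above
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # prints the version of the DPDK just installed
  pkg-config --cflags --libs libdpdk     # compile/link flags a consumer build would pick up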
14:54:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:45.718 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:45.718 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:45.718 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:45.718 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:46.285 Using 'verbs' RDMA provider 00:03:59.055 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:04:13.935 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:04:13.935 Creating mk/config.mk...done. 00:04:13.935 Creating mk/cc.flags.mk...done. 00:04:13.935 Type 'make' to build. 00:04:13.935 14:54:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:13.935 14:54:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:13.935 14:54:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:13.935 14:54:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:13.935 ************************************ 00:04:13.935 START TEST make 00:04:13.935 ************************************ 00:04:13.935 14:54:42 -- common/autotest_common.sh@1114 -- $ make -j10 00:04:13.935 make[1]: Nothing to be done for 'all'. 00:04:35.868 CC lib/log/log.o 00:04:35.868 CC lib/ut_mock/mock.o 00:04:35.868 CC lib/log/log_flags.o 00:04:35.868 CC lib/log/log_deprecated.o 00:04:35.868 CC lib/ut/ut.o 00:04:35.868 LIB libspdk_ut.a 00:04:35.868 LIB libspdk_ut_mock.a 00:04:35.868 LIB libspdk_log.a 00:04:35.868 SO libspdk_ut_mock.so.5.0 00:04:35.868 SO libspdk_ut.so.1.0 00:04:35.868 SO libspdk_log.so.6.1 00:04:35.868 SYMLINK libspdk_ut.so 00:04:35.868 SYMLINK libspdk_ut_mock.so 00:04:35.868 SYMLINK libspdk_log.so 00:04:35.868 CC lib/util/base64.o 00:04:35.868 CC lib/util/bit_array.o 00:04:35.868 CC lib/util/crc16.o 00:04:35.868 CC lib/util/cpuset.o 00:04:35.868 CC lib/util/crc32.o 00:04:35.868 CC lib/util/crc32c.o 00:04:35.868 CC lib/dma/dma.o 00:04:35.868 CC lib/ioat/ioat.o 00:04:35.868 CXX lib/trace_parser/trace.o 00:04:35.868 CC lib/vfio_user/host/vfio_user_pci.o 00:04:35.868 CC lib/util/crc32_ieee.o 00:04:35.868 CC lib/util/crc64.o 00:04:35.868 CC lib/util/dif.o 00:04:35.868 CC lib/util/fd.o 00:04:35.868 LIB libspdk_dma.a 00:04:35.868 CC lib/util/file.o 00:04:35.868 SO libspdk_dma.so.3.0 00:04:35.868 CC lib/util/hexlify.o 00:04:35.868 CC lib/vfio_user/host/vfio_user.o 00:04:35.868 SYMLINK libspdk_dma.so 00:04:35.868 CC lib/util/iov.o 00:04:35.868 CC lib/util/math.o 00:04:35.868 LIB libspdk_ioat.a 00:04:35.868 CC lib/util/pipe.o 00:04:35.868 SO libspdk_ioat.so.6.0 00:04:35.868 CC lib/util/strerror_tls.o 00:04:35.868 CC lib/util/string.o 00:04:35.868 CC lib/util/uuid.o 00:04:35.868 SYMLINK libspdk_ioat.so 00:04:35.868 CC lib/util/fd_group.o 00:04:35.868 CC lib/util/xor.o 00:04:35.868 CC lib/util/zipf.o 00:04:35.868 LIB libspdk_vfio_user.a 00:04:35.868 SO libspdk_vfio_user.so.4.0 00:04:35.868 SYMLINK libspdk_vfio_user.so 00:04:35.868 LIB libspdk_util.a 00:04:35.868 SO libspdk_util.so.8.0 00:04:35.868 SYMLINK libspdk_util.so 00:04:35.868 LIB libspdk_trace_parser.a 00:04:35.868 SO libspdk_trace_parser.so.4.0 00:04:35.868 CC lib/idxd/idxd.o 00:04:35.868 CC lib/idxd/idxd_user.o 
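A minimal sketch, not part of the captured output, of reproducing the configure-and-build step recorded at the start of this block against the prebuilt DPDK; the flags are copied from the configure invocation above and the -j value from the run_test line (paths assume the same /home/vagrant/spdk_repo layout):

  # configure SPDK against the DPDK prefix installed earlier in this log, then build
  cd /home/vagrant/spdk_repo/spdk
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-coverage --with-ublk --with-uring \
      --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared
  make -j10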
00:04:35.868 CC lib/idxd/idxd_kernel.o 00:04:35.868 CC lib/env_dpdk/memory.o 00:04:35.868 CC lib/env_dpdk/env.o 00:04:35.868 CC lib/json/json_parse.o 00:04:35.868 CC lib/conf/conf.o 00:04:35.868 CC lib/vmd/vmd.o 00:04:35.868 CC lib/rdma/common.o 00:04:35.868 SYMLINK libspdk_trace_parser.so 00:04:35.868 CC lib/rdma/rdma_verbs.o 00:04:35.868 CC lib/env_dpdk/pci.o 00:04:36.127 LIB libspdk_conf.a 00:04:36.127 CC lib/env_dpdk/init.o 00:04:36.127 CC lib/env_dpdk/threads.o 00:04:36.127 CC lib/json/json_util.o 00:04:36.127 SO libspdk_conf.so.5.0 00:04:36.127 LIB libspdk_rdma.a 00:04:36.127 SYMLINK libspdk_conf.so 00:04:36.127 CC lib/json/json_write.o 00:04:36.127 SO libspdk_rdma.so.5.0 00:04:36.127 CC lib/env_dpdk/pci_ioat.o 00:04:36.127 CC lib/env_dpdk/pci_virtio.o 00:04:36.127 SYMLINK libspdk_rdma.so 00:04:36.127 CC lib/env_dpdk/pci_vmd.o 00:04:36.385 CC lib/env_dpdk/pci_idxd.o 00:04:36.385 CC lib/env_dpdk/pci_event.o 00:04:36.385 CC lib/env_dpdk/sigbus_handler.o 00:04:36.385 CC lib/vmd/led.o 00:04:36.385 LIB libspdk_idxd.a 00:04:36.385 CC lib/env_dpdk/pci_dpdk.o 00:04:36.385 SO libspdk_idxd.so.11.0 00:04:36.386 LIB libspdk_json.a 00:04:36.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:36.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:36.386 SO libspdk_json.so.5.1 00:04:36.386 SYMLINK libspdk_idxd.so 00:04:36.386 LIB libspdk_vmd.a 00:04:36.644 SYMLINK libspdk_json.so 00:04:36.644 SO libspdk_vmd.so.5.0 00:04:36.644 SYMLINK libspdk_vmd.so 00:04:36.644 CC lib/jsonrpc/jsonrpc_server.o 00:04:36.644 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:36.644 CC lib/jsonrpc/jsonrpc_client.o 00:04:36.644 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:36.903 LIB libspdk_jsonrpc.a 00:04:36.903 SO libspdk_jsonrpc.so.5.1 00:04:37.162 SYMLINK libspdk_jsonrpc.so 00:04:37.162 LIB libspdk_env_dpdk.a 00:04:37.162 CC lib/rpc/rpc.o 00:04:37.457 SO libspdk_env_dpdk.so.13.0 00:04:37.457 LIB libspdk_rpc.a 00:04:37.457 SYMLINK libspdk_env_dpdk.so 00:04:37.457 SO libspdk_rpc.so.5.0 00:04:37.457 SYMLINK libspdk_rpc.so 00:04:37.725 CC lib/trace/trace.o 00:04:37.725 CC lib/trace/trace_flags.o 00:04:37.725 CC lib/notify/notify.o 00:04:37.725 CC lib/sock/sock_rpc.o 00:04:37.725 CC lib/trace/trace_rpc.o 00:04:37.725 CC lib/notify/notify_rpc.o 00:04:37.725 CC lib/sock/sock.o 00:04:37.997 LIB libspdk_notify.a 00:04:37.997 SO libspdk_notify.so.5.0 00:04:37.997 LIB libspdk_trace.a 00:04:37.997 SO libspdk_trace.so.9.0 00:04:37.997 SYMLINK libspdk_notify.so 00:04:37.997 SYMLINK libspdk_trace.so 00:04:37.997 LIB libspdk_sock.a 00:04:37.997 SO libspdk_sock.so.8.0 00:04:38.254 SYMLINK libspdk_sock.so 00:04:38.254 CC lib/thread/thread.o 00:04:38.254 CC lib/thread/iobuf.o 00:04:38.254 CC lib/nvme/nvme_ctrlr.o 00:04:38.254 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:38.254 CC lib/nvme/nvme_fabric.o 00:04:38.254 CC lib/nvme/nvme_ns_cmd.o 00:04:38.254 CC lib/nvme/nvme_pcie_common.o 00:04:38.254 CC lib/nvme/nvme_ns.o 00:04:38.254 CC lib/nvme/nvme_pcie.o 00:04:38.254 CC lib/nvme/nvme_qpair.o 00:04:38.512 CC lib/nvme/nvme.o 00:04:39.078 CC lib/nvme/nvme_quirks.o 00:04:39.078 CC lib/nvme/nvme_transport.o 00:04:39.078 CC lib/nvme/nvme_discovery.o 00:04:39.337 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:39.337 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:39.337 CC lib/nvme/nvme_tcp.o 00:04:39.337 CC lib/nvme/nvme_opal.o 00:04:39.597 CC lib/nvme/nvme_io_msg.o 00:04:39.597 CC lib/nvme/nvme_poll_group.o 00:04:39.856 CC lib/nvme/nvme_zns.o 00:04:39.856 LIB libspdk_thread.a 00:04:39.856 SO libspdk_thread.so.9.0 00:04:39.856 CC lib/nvme/nvme_cuse.o 00:04:39.856 SYMLINK libspdk_thread.so 
00:04:39.856 CC lib/nvme/nvme_vfio_user.o 00:04:39.856 CC lib/nvme/nvme_rdma.o 00:04:40.115 CC lib/accel/accel.o 00:04:40.115 CC lib/blob/blobstore.o 00:04:40.115 CC lib/blob/request.o 00:04:40.372 CC lib/blob/zeroes.o 00:04:40.372 CC lib/blob/blob_bs_dev.o 00:04:40.638 CC lib/init/json_config.o 00:04:40.638 CC lib/accel/accel_rpc.o 00:04:40.638 CC lib/accel/accel_sw.o 00:04:40.638 CC lib/virtio/virtio.o 00:04:40.638 CC lib/virtio/virtio_vhost_user.o 00:04:40.638 CC lib/virtio/virtio_vfio_user.o 00:04:40.638 CC lib/virtio/virtio_pci.o 00:04:40.638 CC lib/init/subsystem.o 00:04:40.896 CC lib/init/subsystem_rpc.o 00:04:40.896 CC lib/init/rpc.o 00:04:40.896 LIB libspdk_accel.a 00:04:40.896 LIB libspdk_init.a 00:04:41.155 LIB libspdk_virtio.a 00:04:41.155 SO libspdk_accel.so.14.0 00:04:41.155 SO libspdk_init.so.4.0 00:04:41.155 SO libspdk_virtio.so.6.0 00:04:41.155 SYMLINK libspdk_init.so 00:04:41.155 SYMLINK libspdk_accel.so 00:04:41.155 SYMLINK libspdk_virtio.so 00:04:41.155 LIB libspdk_nvme.a 00:04:41.155 CC lib/event/app.o 00:04:41.155 CC lib/event/reactor.o 00:04:41.155 CC lib/event/log_rpc.o 00:04:41.155 CC lib/event/app_rpc.o 00:04:41.155 CC lib/event/scheduler_static.o 00:04:41.155 CC lib/bdev/bdev.o 00:04:41.415 CC lib/bdev/bdev_rpc.o 00:04:41.415 CC lib/bdev/bdev_zone.o 00:04:41.415 CC lib/bdev/part.o 00:04:41.415 SO libspdk_nvme.so.12.0 00:04:41.415 CC lib/bdev/scsi_nvme.o 00:04:41.673 SYMLINK libspdk_nvme.so 00:04:41.673 LIB libspdk_event.a 00:04:41.673 SO libspdk_event.so.12.0 00:04:41.932 SYMLINK libspdk_event.so 00:04:42.868 LIB libspdk_blob.a 00:04:42.868 SO libspdk_blob.so.10.1 00:04:42.868 SYMLINK libspdk_blob.so 00:04:43.127 CC lib/blobfs/blobfs.o 00:04:43.127 CC lib/blobfs/tree.o 00:04:43.127 CC lib/lvol/lvol.o 00:04:44.061 LIB libspdk_blobfs.a 00:04:44.061 LIB libspdk_bdev.a 00:04:44.061 SO libspdk_blobfs.so.9.0 00:04:44.061 SO libspdk_bdev.so.14.0 00:04:44.061 SYMLINK libspdk_blobfs.so 00:04:44.061 LIB libspdk_lvol.a 00:04:44.061 SYMLINK libspdk_bdev.so 00:04:44.061 SO libspdk_lvol.so.9.1 00:04:44.061 SYMLINK libspdk_lvol.so 00:04:44.061 CC lib/ftl/ftl_core.o 00:04:44.061 CC lib/nbd/nbd.o 00:04:44.061 CC lib/nbd/nbd_rpc.o 00:04:44.061 CC lib/ftl/ftl_init.o 00:04:44.061 CC lib/ftl/ftl_layout.o 00:04:44.061 CC lib/ublk/ublk.o 00:04:44.061 CC lib/nvmf/ctrlr.o 00:04:44.061 CC lib/ublk/ublk_rpc.o 00:04:44.061 CC lib/scsi/dev.o 00:04:44.061 CC lib/ftl/ftl_debug.o 00:04:44.320 CC lib/ftl/ftl_io.o 00:04:44.320 CC lib/ftl/ftl_sb.o 00:04:44.320 CC lib/ftl/ftl_l2p.o 00:04:44.320 CC lib/nvmf/ctrlr_discovery.o 00:04:44.580 CC lib/scsi/lun.o 00:04:44.580 CC lib/ftl/ftl_l2p_flat.o 00:04:44.580 LIB libspdk_nbd.a 00:04:44.580 CC lib/nvmf/ctrlr_bdev.o 00:04:44.580 CC lib/ftl/ftl_nv_cache.o 00:04:44.580 CC lib/scsi/port.o 00:04:44.580 CC lib/ftl/ftl_band.o 00:04:44.580 SO libspdk_nbd.so.6.0 00:04:44.580 SYMLINK libspdk_nbd.so 00:04:44.580 CC lib/nvmf/subsystem.o 00:04:44.837 CC lib/nvmf/nvmf.o 00:04:44.837 CC lib/scsi/scsi.o 00:04:44.837 CC lib/ftl/ftl_band_ops.o 00:04:44.837 LIB libspdk_ublk.a 00:04:44.837 SO libspdk_ublk.so.2.0 00:04:44.837 CC lib/nvmf/nvmf_rpc.o 00:04:44.837 SYMLINK libspdk_ublk.so 00:04:44.837 CC lib/scsi/scsi_bdev.o 00:04:44.837 CC lib/scsi/scsi_pr.o 00:04:45.098 CC lib/scsi/scsi_rpc.o 00:04:45.098 CC lib/scsi/task.o 00:04:45.098 CC lib/nvmf/transport.o 00:04:45.356 CC lib/nvmf/tcp.o 00:04:45.356 CC lib/nvmf/rdma.o 00:04:45.356 CC lib/ftl/ftl_writer.o 00:04:45.356 LIB libspdk_scsi.a 00:04:45.356 SO libspdk_scsi.so.8.0 00:04:45.617 CC lib/ftl/ftl_rq.o 00:04:45.617 
SYMLINK libspdk_scsi.so 00:04:45.617 CC lib/ftl/ftl_reloc.o 00:04:45.617 CC lib/ftl/ftl_l2p_cache.o 00:04:45.617 CC lib/ftl/ftl_p2l.o 00:04:45.876 CC lib/iscsi/conn.o 00:04:45.876 CC lib/vhost/vhost.o 00:04:45.876 CC lib/vhost/vhost_rpc.o 00:04:45.876 CC lib/vhost/vhost_scsi.o 00:04:45.876 CC lib/vhost/vhost_blk.o 00:04:45.876 CC lib/vhost/rte_vhost_user.o 00:04:46.135 CC lib/ftl/mngt/ftl_mngt.o 00:04:46.135 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:46.395 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:46.395 CC lib/iscsi/init_grp.o 00:04:46.395 CC lib/iscsi/iscsi.o 00:04:46.395 CC lib/iscsi/md5.o 00:04:46.655 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:46.655 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:46.655 CC lib/iscsi/param.o 00:04:46.655 CC lib/iscsi/portal_grp.o 00:04:46.655 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:46.913 CC lib/iscsi/tgt_node.o 00:04:46.913 CC lib/iscsi/iscsi_subsystem.o 00:04:46.913 CC lib/iscsi/iscsi_rpc.o 00:04:46.913 LIB libspdk_vhost.a 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:46.913 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:47.172 SO libspdk_vhost.so.7.1 00:04:47.172 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:47.172 SYMLINK libspdk_vhost.so 00:04:47.172 CC lib/ftl/utils/ftl_conf.o 00:04:47.172 CC lib/iscsi/task.o 00:04:47.172 CC lib/ftl/utils/ftl_md.o 00:04:47.430 CC lib/ftl/utils/ftl_mempool.o 00:04:47.430 CC lib/ftl/utils/ftl_bitmap.o 00:04:47.430 LIB libspdk_nvmf.a 00:04:47.430 CC lib/ftl/utils/ftl_property.o 00:04:47.430 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:47.430 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:47.430 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:47.430 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:47.430 SO libspdk_nvmf.so.17.0 00:04:47.430 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:47.430 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:47.689 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:47.689 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:47.689 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:47.689 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:47.689 CC lib/ftl/base/ftl_base_dev.o 00:04:47.689 SYMLINK libspdk_nvmf.so 00:04:47.689 CC lib/ftl/base/ftl_base_bdev.o 00:04:47.689 CC lib/ftl/ftl_trace.o 00:04:47.947 LIB libspdk_iscsi.a 00:04:47.947 LIB libspdk_ftl.a 00:04:47.947 SO libspdk_iscsi.so.7.0 00:04:48.206 SYMLINK libspdk_iscsi.so 00:04:48.206 SO libspdk_ftl.so.8.0 00:04:48.466 SYMLINK libspdk_ftl.so 00:04:48.725 CC module/env_dpdk/env_dpdk_rpc.o 00:04:48.725 CC module/scheduler/gscheduler/gscheduler.o 00:04:48.725 CC module/accel/ioat/accel_ioat.o 00:04:48.725 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:48.725 CC module/sock/posix/posix.o 00:04:48.725 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:48.725 CC module/accel/dsa/accel_dsa.o 00:04:48.725 CC module/accel/error/accel_error.o 00:04:48.725 CC module/blob/bdev/blob_bdev.o 00:04:48.725 CC module/accel/iaa/accel_iaa.o 00:04:48.984 LIB libspdk_env_dpdk_rpc.a 00:04:48.984 SO libspdk_env_dpdk_rpc.so.5.0 00:04:48.984 LIB libspdk_scheduler_gscheduler.a 00:04:48.984 LIB libspdk_scheduler_dpdk_governor.a 00:04:48.984 SYMLINK libspdk_env_dpdk_rpc.so 00:04:48.984 CC module/accel/iaa/accel_iaa_rpc.o 00:04:48.984 SO libspdk_scheduler_gscheduler.so.3.0 00:04:48.984 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:48.984 CC module/accel/ioat/accel_ioat_rpc.o 00:04:48.984 CC module/accel/error/accel_error_rpc.o 00:04:48.984 SYMLINK 
libspdk_scheduler_gscheduler.so 00:04:48.984 CC module/accel/dsa/accel_dsa_rpc.o 00:04:48.984 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:48.984 LIB libspdk_scheduler_dynamic.a 00:04:48.984 SO libspdk_scheduler_dynamic.so.3.0 00:04:48.984 LIB libspdk_accel_iaa.a 00:04:48.984 LIB libspdk_blob_bdev.a 00:04:48.984 LIB libspdk_accel_ioat.a 00:04:49.243 SO libspdk_accel_iaa.so.2.0 00:04:49.243 SYMLINK libspdk_scheduler_dynamic.so 00:04:49.243 SO libspdk_blob_bdev.so.10.1 00:04:49.243 CC module/sock/uring/uring.o 00:04:49.243 SO libspdk_accel_ioat.so.5.0 00:04:49.243 LIB libspdk_accel_error.a 00:04:49.243 SYMLINK libspdk_accel_iaa.so 00:04:49.243 LIB libspdk_accel_dsa.a 00:04:49.243 SO libspdk_accel_error.so.1.0 00:04:49.243 SYMLINK libspdk_blob_bdev.so 00:04:49.243 SYMLINK libspdk_accel_ioat.so 00:04:49.243 SO libspdk_accel_dsa.so.4.0 00:04:49.243 SYMLINK libspdk_accel_error.so 00:04:49.243 SYMLINK libspdk_accel_dsa.so 00:04:49.502 CC module/blobfs/bdev/blobfs_bdev.o 00:04:49.502 CC module/bdev/delay/vbdev_delay.o 00:04:49.502 CC module/bdev/null/bdev_null.o 00:04:49.502 CC module/bdev/gpt/gpt.o 00:04:49.502 CC module/bdev/lvol/vbdev_lvol.o 00:04:49.502 CC module/bdev/error/vbdev_error.o 00:04:49.502 CC module/bdev/malloc/bdev_malloc.o 00:04:49.502 CC module/bdev/nvme/bdev_nvme.o 00:04:49.502 LIB libspdk_sock_posix.a 00:04:49.502 SO libspdk_sock_posix.so.5.0 00:04:49.502 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:49.502 CC module/bdev/gpt/vbdev_gpt.o 00:04:49.761 SYMLINK libspdk_sock_posix.so 00:04:49.761 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:49.761 CC module/bdev/null/bdev_null_rpc.o 00:04:49.761 CC module/bdev/error/vbdev_error_rpc.o 00:04:49.761 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:49.761 LIB libspdk_blobfs_bdev.a 00:04:49.761 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:49.761 SO libspdk_blobfs_bdev.so.5.0 00:04:49.761 LIB libspdk_bdev_null.a 00:04:49.761 LIB libspdk_bdev_error.a 00:04:50.020 SO libspdk_bdev_null.so.5.0 00:04:50.020 SO libspdk_bdev_error.so.5.0 00:04:50.020 LIB libspdk_bdev_gpt.a 00:04:50.020 SYMLINK libspdk_blobfs_bdev.so 00:04:50.020 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:50.020 LIB libspdk_sock_uring.a 00:04:50.020 SO libspdk_bdev_gpt.so.5.0 00:04:50.020 CC module/bdev/nvme/nvme_rpc.o 00:04:50.020 SYMLINK libspdk_bdev_null.so 00:04:50.020 LIB libspdk_bdev_delay.a 00:04:50.020 SYMLINK libspdk_bdev_error.so 00:04:50.020 SO libspdk_sock_uring.so.4.0 00:04:50.020 LIB libspdk_bdev_lvol.a 00:04:50.020 SO libspdk_bdev_delay.so.5.0 00:04:50.020 LIB libspdk_bdev_malloc.a 00:04:50.020 SO libspdk_bdev_lvol.so.5.0 00:04:50.020 SYMLINK libspdk_bdev_gpt.so 00:04:50.020 SO libspdk_bdev_malloc.so.5.0 00:04:50.020 SYMLINK libspdk_sock_uring.so 00:04:50.020 CC module/bdev/nvme/bdev_mdns_client.o 00:04:50.020 SYMLINK libspdk_bdev_delay.so 00:04:50.020 CC module/bdev/passthru/vbdev_passthru.o 00:04:50.020 CC module/bdev/raid/bdev_raid.o 00:04:50.020 CC module/bdev/nvme/vbdev_opal.o 00:04:50.020 SYMLINK libspdk_bdev_lvol.so 00:04:50.020 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:50.020 SYMLINK libspdk_bdev_malloc.so 00:04:50.279 CC module/bdev/split/vbdev_split.o 00:04:50.279 CC module/bdev/split/vbdev_split_rpc.o 00:04:50.279 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:50.279 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:50.279 CC module/bdev/raid/bdev_raid_rpc.o 00:04:50.279 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:50.279 LIB libspdk_bdev_passthru.a 00:04:50.279 LIB libspdk_bdev_split.a 00:04:50.538 SO libspdk_bdev_passthru.so.5.0 00:04:50.538 SO 
libspdk_bdev_split.so.5.0 00:04:50.538 SYMLINK libspdk_bdev_passthru.so 00:04:50.538 CC module/bdev/raid/bdev_raid_sb.o 00:04:50.538 CC module/bdev/raid/raid0.o 00:04:50.538 CC module/bdev/raid/raid1.o 00:04:50.538 CC module/bdev/uring/bdev_uring.o 00:04:50.538 SYMLINK libspdk_bdev_split.so 00:04:50.538 CC module/bdev/uring/bdev_uring_rpc.o 00:04:50.538 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:50.538 CC module/bdev/raid/concat.o 00:04:50.797 LIB libspdk_bdev_zone_block.a 00:04:50.797 CC module/bdev/aio/bdev_aio.o 00:04:50.797 CC module/bdev/aio/bdev_aio_rpc.o 00:04:50.797 SO libspdk_bdev_zone_block.so.5.0 00:04:50.798 SYMLINK libspdk_bdev_zone_block.so 00:04:50.798 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:50.798 CC module/bdev/ftl/bdev_ftl.o 00:04:50.798 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:50.798 CC module/bdev/iscsi/bdev_iscsi.o 00:04:50.798 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:50.798 LIB libspdk_bdev_uring.a 00:04:50.798 SO libspdk_bdev_uring.so.5.0 00:04:50.798 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:51.057 SYMLINK libspdk_bdev_uring.so 00:04:51.057 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:51.057 LIB libspdk_bdev_raid.a 00:04:51.057 SO libspdk_bdev_raid.so.5.0 00:04:51.057 LIB libspdk_bdev_aio.a 00:04:51.057 SO libspdk_bdev_aio.so.5.0 00:04:51.057 LIB libspdk_bdev_ftl.a 00:04:51.057 SYMLINK libspdk_bdev_raid.so 00:04:51.057 SO libspdk_bdev_ftl.so.5.0 00:04:51.057 SYMLINK libspdk_bdev_aio.so 00:04:51.315 LIB libspdk_bdev_iscsi.a 00:04:51.315 SYMLINK libspdk_bdev_ftl.so 00:04:51.315 SO libspdk_bdev_iscsi.so.5.0 00:04:51.315 SYMLINK libspdk_bdev_iscsi.so 00:04:51.315 LIB libspdk_bdev_virtio.a 00:04:51.315 SO libspdk_bdev_virtio.so.5.0 00:04:51.574 SYMLINK libspdk_bdev_virtio.so 00:04:51.835 LIB libspdk_bdev_nvme.a 00:04:51.835 SO libspdk_bdev_nvme.so.6.0 00:04:51.835 SYMLINK libspdk_bdev_nvme.so 00:04:52.404 CC module/event/subsystems/scheduler/scheduler.o 00:04:52.404 CC module/event/subsystems/iobuf/iobuf.o 00:04:52.404 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:52.404 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:52.404 CC module/event/subsystems/sock/sock.o 00:04:52.404 CC module/event/subsystems/vmd/vmd.o 00:04:52.404 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:52.404 LIB libspdk_event_scheduler.a 00:04:52.404 LIB libspdk_event_sock.a 00:04:52.404 SO libspdk_event_scheduler.so.3.0 00:04:52.404 LIB libspdk_event_vhost_blk.a 00:04:52.404 LIB libspdk_event_iobuf.a 00:04:52.404 SO libspdk_event_sock.so.4.0 00:04:52.404 SO libspdk_event_vhost_blk.so.2.0 00:04:52.404 LIB libspdk_event_vmd.a 00:04:52.404 SO libspdk_event_iobuf.so.2.0 00:04:52.404 SYMLINK libspdk_event_scheduler.so 00:04:52.404 SO libspdk_event_vmd.so.5.0 00:04:52.404 SYMLINK libspdk_event_vhost_blk.so 00:04:52.404 SYMLINK libspdk_event_sock.so 00:04:52.663 SYMLINK libspdk_event_iobuf.so 00:04:52.663 SYMLINK libspdk_event_vmd.so 00:04:52.663 CC module/event/subsystems/accel/accel.o 00:04:52.922 LIB libspdk_event_accel.a 00:04:52.922 SO libspdk_event_accel.so.5.0 00:04:52.922 SYMLINK libspdk_event_accel.so 00:04:53.189 CC module/event/subsystems/bdev/bdev.o 00:04:53.450 LIB libspdk_event_bdev.a 00:04:53.450 SO libspdk_event_bdev.so.5.0 00:04:53.450 SYMLINK libspdk_event_bdev.so 00:04:53.709 CC module/event/subsystems/scsi/scsi.o 00:04:53.709 CC module/event/subsystems/ublk/ublk.o 00:04:53.709 CC module/event/subsystems/nbd/nbd.o 00:04:53.709 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:53.709 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:53.709 LIB 
libspdk_event_ublk.a 00:04:53.709 LIB libspdk_event_nbd.a 00:04:53.709 LIB libspdk_event_scsi.a 00:04:53.981 SO libspdk_event_ublk.so.2.0 00:04:53.981 SO libspdk_event_nbd.so.5.0 00:04:53.981 SO libspdk_event_scsi.so.5.0 00:04:53.981 SYMLINK libspdk_event_nbd.so 00:04:53.981 SYMLINK libspdk_event_ublk.so 00:04:53.981 LIB libspdk_event_nvmf.a 00:04:53.981 SYMLINK libspdk_event_scsi.so 00:04:53.981 SO libspdk_event_nvmf.so.5.0 00:04:53.981 SYMLINK libspdk_event_nvmf.so 00:04:53.981 CC module/event/subsystems/iscsi/iscsi.o 00:04:54.292 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:54.292 LIB libspdk_event_vhost_scsi.a 00:04:54.292 LIB libspdk_event_iscsi.a 00:04:54.292 SO libspdk_event_vhost_scsi.so.2.0 00:04:54.292 SO libspdk_event_iscsi.so.5.0 00:04:54.555 SYMLINK libspdk_event_iscsi.so 00:04:54.555 SYMLINK libspdk_event_vhost_scsi.so 00:04:54.555 SO libspdk.so.5.0 00:04:54.555 SYMLINK libspdk.so 00:04:54.814 CC app/trace_record/trace_record.o 00:04:54.814 CXX app/trace/trace.o 00:04:54.814 CC app/nvmf_tgt/nvmf_main.o 00:04:54.814 CC app/iscsi_tgt/iscsi_tgt.o 00:04:54.814 CC examples/accel/perf/accel_perf.o 00:04:54.814 CC test/blobfs/mkfs/mkfs.o 00:04:54.814 CC test/app/bdev_svc/bdev_svc.o 00:04:54.814 CC test/accel/dif/dif.o 00:04:54.814 CC test/bdev/bdevio/bdevio.o 00:04:54.814 CC examples/bdev/hello_world/hello_bdev.o 00:04:55.074 LINK nvmf_tgt 00:04:55.074 LINK spdk_trace_record 00:04:55.074 LINK iscsi_tgt 00:04:55.074 LINK bdev_svc 00:04:55.074 LINK mkfs 00:04:55.074 LINK hello_bdev 00:04:55.074 LINK spdk_trace 00:04:55.333 TEST_HEADER include/spdk/accel.h 00:04:55.333 TEST_HEADER include/spdk/accel_module.h 00:04:55.333 TEST_HEADER include/spdk/assert.h 00:04:55.333 TEST_HEADER include/spdk/barrier.h 00:04:55.333 LINK dif 00:04:55.333 TEST_HEADER include/spdk/base64.h 00:04:55.333 TEST_HEADER include/spdk/bdev.h 00:04:55.333 TEST_HEADER include/spdk/bdev_module.h 00:04:55.333 TEST_HEADER include/spdk/bdev_zone.h 00:04:55.333 TEST_HEADER include/spdk/bit_array.h 00:04:55.333 TEST_HEADER include/spdk/bit_pool.h 00:04:55.333 TEST_HEADER include/spdk/blob_bdev.h 00:04:55.333 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:55.333 TEST_HEADER include/spdk/blobfs.h 00:04:55.333 LINK accel_perf 00:04:55.333 TEST_HEADER include/spdk/blob.h 00:04:55.333 CC examples/bdev/bdevperf/bdevperf.o 00:04:55.333 TEST_HEADER include/spdk/conf.h 00:04:55.333 TEST_HEADER include/spdk/config.h 00:04:55.333 TEST_HEADER include/spdk/cpuset.h 00:04:55.333 TEST_HEADER include/spdk/crc16.h 00:04:55.333 TEST_HEADER include/spdk/crc32.h 00:04:55.333 TEST_HEADER include/spdk/crc64.h 00:04:55.333 TEST_HEADER include/spdk/dif.h 00:04:55.333 TEST_HEADER include/spdk/dma.h 00:04:55.333 TEST_HEADER include/spdk/endian.h 00:04:55.333 LINK bdevio 00:04:55.333 TEST_HEADER include/spdk/env_dpdk.h 00:04:55.333 TEST_HEADER include/spdk/env.h 00:04:55.333 TEST_HEADER include/spdk/event.h 00:04:55.333 TEST_HEADER include/spdk/fd_group.h 00:04:55.333 TEST_HEADER include/spdk/fd.h 00:04:55.333 TEST_HEADER include/spdk/file.h 00:04:55.333 TEST_HEADER include/spdk/ftl.h 00:04:55.333 TEST_HEADER include/spdk/gpt_spec.h 00:04:55.333 TEST_HEADER include/spdk/hexlify.h 00:04:55.333 TEST_HEADER include/spdk/histogram_data.h 00:04:55.333 TEST_HEADER include/spdk/idxd.h 00:04:55.333 TEST_HEADER include/spdk/idxd_spec.h 00:04:55.333 TEST_HEADER include/spdk/init.h 00:04:55.333 TEST_HEADER include/spdk/ioat.h 00:04:55.333 TEST_HEADER include/spdk/ioat_spec.h 00:04:55.333 TEST_HEADER include/spdk/iscsi_spec.h 00:04:55.333 TEST_HEADER 
include/spdk/json.h 00:04:55.333 TEST_HEADER include/spdk/jsonrpc.h 00:04:55.333 TEST_HEADER include/spdk/likely.h 00:04:55.333 TEST_HEADER include/spdk/log.h 00:04:55.333 TEST_HEADER include/spdk/lvol.h 00:04:55.333 TEST_HEADER include/spdk/memory.h 00:04:55.333 TEST_HEADER include/spdk/mmio.h 00:04:55.333 TEST_HEADER include/spdk/nbd.h 00:04:55.333 TEST_HEADER include/spdk/notify.h 00:04:55.333 TEST_HEADER include/spdk/nvme.h 00:04:55.333 TEST_HEADER include/spdk/nvme_intel.h 00:04:55.333 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:55.333 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:55.333 TEST_HEADER include/spdk/nvme_spec.h 00:04:55.333 TEST_HEADER include/spdk/nvme_zns.h 00:04:55.333 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:55.333 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:55.333 TEST_HEADER include/spdk/nvmf.h 00:04:55.333 TEST_HEADER include/spdk/nvmf_spec.h 00:04:55.333 TEST_HEADER include/spdk/nvmf_transport.h 00:04:55.333 TEST_HEADER include/spdk/opal.h 00:04:55.333 TEST_HEADER include/spdk/opal_spec.h 00:04:55.333 TEST_HEADER include/spdk/pci_ids.h 00:04:55.333 TEST_HEADER include/spdk/pipe.h 00:04:55.333 TEST_HEADER include/spdk/queue.h 00:04:55.333 TEST_HEADER include/spdk/reduce.h 00:04:55.333 TEST_HEADER include/spdk/rpc.h 00:04:55.333 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:55.333 TEST_HEADER include/spdk/scheduler.h 00:04:55.333 TEST_HEADER include/spdk/scsi.h 00:04:55.333 TEST_HEADER include/spdk/scsi_spec.h 00:04:55.333 TEST_HEADER include/spdk/sock.h 00:04:55.333 TEST_HEADER include/spdk/stdinc.h 00:04:55.333 CC test/dma/test_dma/test_dma.o 00:04:55.333 TEST_HEADER include/spdk/string.h 00:04:55.333 TEST_HEADER include/spdk/thread.h 00:04:55.333 TEST_HEADER include/spdk/trace.h 00:04:55.333 TEST_HEADER include/spdk/trace_parser.h 00:04:55.333 TEST_HEADER include/spdk/tree.h 00:04:55.333 TEST_HEADER include/spdk/ublk.h 00:04:55.333 TEST_HEADER include/spdk/util.h 00:04:55.333 TEST_HEADER include/spdk/uuid.h 00:04:55.333 TEST_HEADER include/spdk/version.h 00:04:55.333 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:55.333 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:55.333 TEST_HEADER include/spdk/vhost.h 00:04:55.333 TEST_HEADER include/spdk/vmd.h 00:04:55.593 TEST_HEADER include/spdk/xor.h 00:04:55.593 TEST_HEADER include/spdk/zipf.h 00:04:55.593 CXX test/cpp_headers/accel.o 00:04:55.593 CXX test/cpp_headers/accel_module.o 00:04:55.593 CC test/env/mem_callbacks/mem_callbacks.o 00:04:55.593 CXX test/cpp_headers/assert.o 00:04:55.593 CC app/spdk_tgt/spdk_tgt.o 00:04:55.593 CC test/event/event_perf/event_perf.o 00:04:55.852 LINK mem_callbacks 00:04:55.852 CXX test/cpp_headers/barrier.o 00:04:55.852 LINK event_perf 00:04:55.852 CC test/lvol/esnap/esnap.o 00:04:55.852 CC test/app/histogram_perf/histogram_perf.o 00:04:55.852 CC test/app/jsoncat/jsoncat.o 00:04:55.852 LINK spdk_tgt 00:04:55.852 LINK nvme_fuzz 00:04:55.852 LINK test_dma 00:04:55.852 LINK jsoncat 00:04:55.852 CC test/env/vtophys/vtophys.o 00:04:55.852 CXX test/cpp_headers/base64.o 00:04:55.852 CC test/event/reactor/reactor.o 00:04:55.852 LINK histogram_perf 00:04:56.111 LINK bdevperf 00:04:56.111 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:56.111 CC app/spdk_lspci/spdk_lspci.o 00:04:56.111 LINK vtophys 00:04:56.111 CXX test/cpp_headers/bdev.o 00:04:56.111 LINK reactor 00:04:56.111 CC app/spdk_nvme_perf/perf.o 00:04:56.111 CC app/spdk_nvme_discover/discovery_aer.o 00:04:56.111 CC app/spdk_nvme_identify/identify.o 00:04:56.111 LINK spdk_lspci 00:04:56.376 CXX test/cpp_headers/bdev_module.o 
00:04:56.376 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:56.376 CC test/event/reactor_perf/reactor_perf.o 00:04:56.376 LINK spdk_nvme_discover 00:04:56.376 CC examples/blob/hello_world/hello_blob.o 00:04:56.376 CC examples/blob/cli/blobcli.o 00:04:56.376 LINK env_dpdk_post_init 00:04:56.376 LINK reactor_perf 00:04:56.376 CXX test/cpp_headers/bdev_zone.o 00:04:56.635 CXX test/cpp_headers/bit_array.o 00:04:56.635 LINK hello_blob 00:04:56.635 CC test/env/memory/memory_ut.o 00:04:56.635 CC test/event/app_repeat/app_repeat.o 00:04:56.635 CXX test/cpp_headers/bit_pool.o 00:04:56.635 CC app/spdk_top/spdk_top.o 00:04:56.894 LINK app_repeat 00:04:56.894 CXX test/cpp_headers/blob_bdev.o 00:04:56.894 LINK blobcli 00:04:56.894 CC test/event/scheduler/scheduler.o 00:04:57.153 CXX test/cpp_headers/blobfs_bdev.o 00:04:57.153 LINK spdk_nvme_identify 00:04:57.153 LINK spdk_nvme_perf 00:04:57.153 LINK memory_ut 00:04:57.153 CC app/vhost/vhost.o 00:04:57.153 CXX test/cpp_headers/blobfs.o 00:04:57.153 CC examples/ioat/perf/perf.o 00:04:57.153 LINK scheduler 00:04:57.153 CC examples/ioat/verify/verify.o 00:04:57.153 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:57.412 LINK vhost 00:04:57.412 CC test/env/pci/pci_ut.o 00:04:57.412 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:57.412 CXX test/cpp_headers/blob.o 00:04:57.412 CXX test/cpp_headers/conf.o 00:04:57.412 LINK ioat_perf 00:04:57.412 LINK verify 00:04:57.670 CXX test/cpp_headers/config.o 00:04:57.670 LINK spdk_top 00:04:57.670 CXX test/cpp_headers/cpuset.o 00:04:57.670 CC test/app/stub/stub.o 00:04:57.670 CC app/spdk_dd/spdk_dd.o 00:04:57.670 LINK iscsi_fuzz 00:04:57.670 CC test/nvme/aer/aer.o 00:04:57.671 CC examples/nvme/hello_world/hello_world.o 00:04:57.671 CXX test/cpp_headers/crc16.o 00:04:57.929 LINK pci_ut 00:04:57.929 LINK vhost_fuzz 00:04:57.929 LINK stub 00:04:57.929 CC app/fio/nvme/fio_plugin.o 00:04:57.929 CXX test/cpp_headers/crc32.o 00:04:57.929 CXX test/cpp_headers/crc64.o 00:04:57.929 CXX test/cpp_headers/dif.o 00:04:57.929 LINK hello_world 00:04:57.929 CXX test/cpp_headers/dma.o 00:04:57.929 LINK aer 00:04:58.188 CXX test/cpp_headers/endian.o 00:04:58.188 LINK spdk_dd 00:04:58.188 CXX test/cpp_headers/env_dpdk.o 00:04:58.188 CXX test/cpp_headers/env.o 00:04:58.188 CC app/fio/bdev/fio_plugin.o 00:04:58.188 CC examples/nvme/reconnect/reconnect.o 00:04:58.188 CC examples/sock/hello_world/hello_sock.o 00:04:58.188 CC test/nvme/reset/reset.o 00:04:58.446 CXX test/cpp_headers/event.o 00:04:58.446 CC examples/vmd/lsvmd/lsvmd.o 00:04:58.446 CC examples/nvmf/nvmf/nvmf.o 00:04:58.446 CC examples/util/zipf/zipf.o 00:04:58.446 LINK spdk_nvme 00:04:58.446 LINK hello_sock 00:04:58.446 LINK reset 00:04:58.446 CXX test/cpp_headers/fd_group.o 00:04:58.446 LINK lsvmd 00:04:58.703 LINK reconnect 00:04:58.703 LINK zipf 00:04:58.703 CC test/nvme/sgl/sgl.o 00:04:58.703 CC examples/thread/thread/thread_ex.o 00:04:58.703 CXX test/cpp_headers/fd.o 00:04:58.703 LINK spdk_bdev 00:04:58.703 LINK nvmf 00:04:58.703 CC examples/vmd/led/led.o 00:04:58.703 CC test/nvme/e2edp/nvme_dp.o 00:04:58.703 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:58.962 CC examples/idxd/perf/perf.o 00:04:58.962 CXX test/cpp_headers/file.o 00:04:58.962 LINK led 00:04:58.962 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:58.962 CXX test/cpp_headers/ftl.o 00:04:58.962 LINK thread 00:04:58.962 LINK sgl 00:04:58.962 LINK nvme_dp 00:04:59.221 CC test/nvme/overhead/overhead.o 00:04:59.221 LINK interrupt_tgt 00:04:59.221 CC test/nvme/err_injection/err_injection.o 
00:04:59.221 CXX test/cpp_headers/gpt_spec.o 00:04:59.221 CXX test/cpp_headers/hexlify.o 00:04:59.221 CC test/rpc_client/rpc_client_test.o 00:04:59.221 CXX test/cpp_headers/histogram_data.o 00:04:59.221 LINK idxd_perf 00:04:59.221 LINK nvme_manage 00:04:59.479 LINK err_injection 00:04:59.479 LINK overhead 00:04:59.479 CXX test/cpp_headers/idxd.o 00:04:59.479 CC examples/nvme/arbitration/arbitration.o 00:04:59.479 CC test/nvme/startup/startup.o 00:04:59.479 LINK rpc_client_test 00:04:59.479 CC test/nvme/reserve/reserve.o 00:04:59.479 CC test/thread/poller_perf/poller_perf.o 00:04:59.479 CC examples/nvme/hotplug/hotplug.o 00:04:59.479 CC test/nvme/simple_copy/simple_copy.o 00:04:59.479 CXX test/cpp_headers/idxd_spec.o 00:04:59.737 LINK startup 00:04:59.737 CC test/nvme/connect_stress/connect_stress.o 00:04:59.737 LINK poller_perf 00:04:59.737 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:59.737 LINK reserve 00:04:59.737 CXX test/cpp_headers/init.o 00:04:59.737 LINK arbitration 00:04:59.737 LINK hotplug 00:04:59.737 LINK simple_copy 00:04:59.737 CC test/nvme/boot_partition/boot_partition.o 00:04:59.737 CXX test/cpp_headers/ioat.o 00:04:59.737 CC test/nvme/compliance/nvme_compliance.o 00:04:59.995 LINK cmb_copy 00:04:59.995 LINK connect_stress 00:04:59.995 CC examples/nvme/abort/abort.o 00:04:59.995 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:59.995 CXX test/cpp_headers/ioat_spec.o 00:04:59.995 CC test/nvme/fused_ordering/fused_ordering.o 00:04:59.995 CXX test/cpp_headers/iscsi_spec.o 00:04:59.995 LINK boot_partition 00:04:59.995 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:00.254 CC test/nvme/fdp/fdp.o 00:05:00.254 CXX test/cpp_headers/json.o 00:05:00.254 LINK nvme_compliance 00:05:00.254 LINK pmr_persistence 00:05:00.254 CXX test/cpp_headers/jsonrpc.o 00:05:00.254 LINK fused_ordering 00:05:00.254 LINK doorbell_aers 00:05:00.254 CC test/nvme/cuse/cuse.o 00:05:00.254 CXX test/cpp_headers/likely.o 00:05:00.254 CXX test/cpp_headers/log.o 00:05:00.254 CXX test/cpp_headers/lvol.o 00:05:00.254 LINK abort 00:05:00.513 CXX test/cpp_headers/memory.o 00:05:00.513 CXX test/cpp_headers/mmio.o 00:05:00.513 CXX test/cpp_headers/nbd.o 00:05:00.513 CXX test/cpp_headers/notify.o 00:05:00.513 LINK fdp 00:05:00.513 CXX test/cpp_headers/nvme.o 00:05:00.513 CXX test/cpp_headers/nvme_intel.o 00:05:00.513 CXX test/cpp_headers/nvme_ocssd.o 00:05:00.513 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:00.513 CXX test/cpp_headers/nvme_spec.o 00:05:00.513 CXX test/cpp_headers/nvme_zns.o 00:05:00.513 CXX test/cpp_headers/nvmf_cmd.o 00:05:00.513 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:00.771 CXX test/cpp_headers/nvmf.o 00:05:00.771 CXX test/cpp_headers/nvmf_spec.o 00:05:00.771 CXX test/cpp_headers/nvmf_transport.o 00:05:00.771 CXX test/cpp_headers/opal.o 00:05:00.771 LINK esnap 00:05:00.771 CXX test/cpp_headers/opal_spec.o 00:05:00.771 CXX test/cpp_headers/pci_ids.o 00:05:00.771 CXX test/cpp_headers/pipe.o 00:05:00.771 CXX test/cpp_headers/queue.o 00:05:00.771 CXX test/cpp_headers/reduce.o 00:05:00.771 CXX test/cpp_headers/rpc.o 00:05:00.771 CXX test/cpp_headers/scheduler.o 00:05:00.771 CXX test/cpp_headers/scsi.o 00:05:01.030 CXX test/cpp_headers/scsi_spec.o 00:05:01.030 CXX test/cpp_headers/sock.o 00:05:01.030 CXX test/cpp_headers/stdinc.o 00:05:01.030 CXX test/cpp_headers/string.o 00:05:01.030 CXX test/cpp_headers/thread.o 00:05:01.030 CXX test/cpp_headers/trace.o 00:05:01.030 CXX test/cpp_headers/trace_parser.o 00:05:01.030 CXX test/cpp_headers/tree.o 00:05:01.030 CXX test/cpp_headers/ublk.o 00:05:01.030 
CXX test/cpp_headers/util.o 00:05:01.030 CXX test/cpp_headers/uuid.o 00:05:01.030 CXX test/cpp_headers/version.o 00:05:01.030 CXX test/cpp_headers/vfio_user_pci.o 00:05:01.030 CXX test/cpp_headers/vfio_user_spec.o 00:05:01.289 CXX test/cpp_headers/vhost.o 00:05:01.289 CXX test/cpp_headers/vmd.o 00:05:01.289 CXX test/cpp_headers/xor.o 00:05:01.289 CXX test/cpp_headers/zipf.o 00:05:01.289 LINK cuse 00:05:01.547 00:05:01.547 real 0m49.420s 00:05:01.547 user 4m51.558s 00:05:01.547 sys 0m56.434s 00:05:01.547 14:55:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:05:01.547 14:55:32 -- common/autotest_common.sh@10 -- $ set +x 00:05:01.547 ************************************ 00:05:01.547 END TEST make 00:05:01.547 ************************************ 00:05:01.547 14:55:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:01.547 14:55:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:01.547 14:55:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:01.547 14:55:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:01.547 14:55:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:01.547 14:55:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:01.547 14:55:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:01.547 14:55:32 -- scripts/common.sh@335 -- # IFS=.-: 00:05:01.547 14:55:32 -- scripts/common.sh@335 -- # read -ra ver1 00:05:01.547 14:55:32 -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.547 14:55:32 -- scripts/common.sh@336 -- # read -ra ver2 00:05:01.547 14:55:32 -- scripts/common.sh@337 -- # local 'op=<' 00:05:01.547 14:55:32 -- scripts/common.sh@339 -- # ver1_l=2 00:05:01.547 14:55:32 -- scripts/common.sh@340 -- # ver2_l=1 00:05:01.547 14:55:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:01.547 14:55:32 -- scripts/common.sh@343 -- # case "$op" in 00:05:01.547 14:55:32 -- scripts/common.sh@344 -- # : 1 00:05:01.547 14:55:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:01.547 14:55:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.547 14:55:32 -- scripts/common.sh@364 -- # decimal 1 00:05:01.547 14:55:32 -- scripts/common.sh@352 -- # local d=1 00:05:01.547 14:55:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.547 14:55:32 -- scripts/common.sh@354 -- # echo 1 00:05:01.547 14:55:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:01.547 14:55:32 -- scripts/common.sh@365 -- # decimal 2 00:05:01.547 14:55:32 -- scripts/common.sh@352 -- # local d=2 00:05:01.547 14:55:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.547 14:55:32 -- scripts/common.sh@354 -- # echo 2 00:05:01.547 14:55:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:01.547 14:55:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:01.548 14:55:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:01.548 14:55:32 -- scripts/common.sh@367 -- # return 0 00:05:01.548 14:55:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.548 14:55:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:01.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.548 --rc genhtml_branch_coverage=1 00:05:01.548 --rc genhtml_function_coverage=1 00:05:01.548 --rc genhtml_legend=1 00:05:01.548 --rc geninfo_all_blocks=1 00:05:01.548 --rc geninfo_unexecuted_blocks=1 00:05:01.548 00:05:01.548 ' 00:05:01.548 14:55:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:01.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.548 --rc genhtml_branch_coverage=1 00:05:01.548 --rc genhtml_function_coverage=1 00:05:01.548 --rc genhtml_legend=1 00:05:01.548 --rc geninfo_all_blocks=1 00:05:01.548 --rc geninfo_unexecuted_blocks=1 00:05:01.548 00:05:01.548 ' 00:05:01.548 14:55:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:01.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.548 --rc genhtml_branch_coverage=1 00:05:01.548 --rc genhtml_function_coverage=1 00:05:01.548 --rc genhtml_legend=1 00:05:01.548 --rc geninfo_all_blocks=1 00:05:01.548 --rc geninfo_unexecuted_blocks=1 00:05:01.548 00:05:01.548 ' 00:05:01.548 14:55:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:01.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.548 --rc genhtml_branch_coverage=1 00:05:01.548 --rc genhtml_function_coverage=1 00:05:01.548 --rc genhtml_legend=1 00:05:01.548 --rc geninfo_all_blocks=1 00:05:01.548 --rc geninfo_unexecuted_blocks=1 00:05:01.548 00:05:01.548 ' 00:05:01.548 14:55:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:01.548 14:55:32 -- nvmf/common.sh@7 -- # uname -s 00:05:01.548 14:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:01.548 14:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:01.548 14:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:01.548 14:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:01.807 14:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:01.807 14:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:01.807 14:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:01.807 14:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:01.807 14:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:01.807 14:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:01.807 14:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:05:01.807 
14:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:05:01.807 14:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:01.807 14:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:01.807 14:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:01.807 14:55:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:01.807 14:55:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:01.807 14:55:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:01.807 14:55:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:01.807 14:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.807 14:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.807 14:55:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.807 14:55:32 -- paths/export.sh@5 -- # export PATH 00:05:01.807 14:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:01.807 14:55:32 -- nvmf/common.sh@46 -- # : 0 00:05:01.807 14:55:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:01.807 14:55:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:01.807 14:55:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:01.807 14:55:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:01.807 14:55:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:01.807 14:55:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:01.807 14:55:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:01.807 14:55:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:01.807 14:55:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:01.807 14:55:32 -- spdk/autotest.sh@32 -- # uname -s 00:05:01.807 14:55:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:01.807 14:55:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:01.807 14:55:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.807 14:55:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:01.807 14:55:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:01.807 14:55:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:01.807 14:55:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:01.807 14:55:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:01.807 14:55:32 -- spdk/autotest.sh@48 -- # 
udevadm_pid=59806 00:05:01.807 14:55:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:01.807 14:55:32 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:05:01.807 14:55:32 -- spdk/autotest.sh@54 -- # echo 59809 00:05:01.807 14:55:32 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:01.807 14:55:32 -- spdk/autotest.sh@56 -- # echo 59814 00:05:01.807 14:55:32 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:05:01.807 14:55:32 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:05:01.807 14:55:32 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:01.807 14:55:32 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:05:01.807 14:55:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.807 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:05:01.807 14:55:32 -- spdk/autotest.sh@70 -- # create_test_list 00:05:01.807 14:55:32 -- common/autotest_common.sh@746 -- # xtrace_disable 00:05:01.807 14:55:32 -- common/autotest_common.sh@10 -- # set +x 00:05:01.807 14:55:32 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:01.807 14:55:32 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:01.807 14:55:32 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:05:01.807 14:55:32 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:01.807 14:55:32 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:05:01.807 14:55:32 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:05:01.807 14:55:32 -- common/autotest_common.sh@1450 -- # uname 00:05:01.807 14:55:32 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:05:01.807 14:55:32 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:05:01.807 14:55:32 -- common/autotest_common.sh@1470 -- # uname 00:05:01.807 14:55:32 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:05:01.807 14:55:32 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:05:01.808 14:55:32 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:01.808 lcov: LCOV version 1.15 00:05:01.808 14:55:32 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:09.986 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:09.986 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:09.986 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:09.986 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:09.986 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:09.986 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:31.919 14:56:02 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:05:31.919 14:56:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:31.919 14:56:02 -- common/autotest_common.sh@10 -- # set +x 00:05:31.919 14:56:02 -- spdk/autotest.sh@89 -- # rm -f 00:05:31.919 14:56:02 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.178 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:32.178 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:32.178 14:56:02 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:05:32.178 14:56:02 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:32.178 14:56:02 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:32.178 14:56:02 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:32.178 14:56:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:32.178 14:56:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:32.178 14:56:02 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:32.178 14:56:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:32.178 14:56:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:32.178 14:56:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:32.178 14:56:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:32.178 14:56:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:32.178 14:56:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:32.178 14:56:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:32.178 14:56:02 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:32.178 14:56:02 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:32.178 14:56:02 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:32.178 14:56:02 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:32.178 14:56:02 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:05:32.178 14:56:02 -- spdk/autotest.sh@108 -- # grep -v p 00:05:32.178 14:56:02 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:32.178 14:56:02 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:32.178 14:56:02 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:32.178 14:56:02 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:05:32.178 14:56:02 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:32.178 14:56:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:32.437 No valid GPT data, bailing 00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 
00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # pt= 00:05:32.437 14:56:03 -- scripts/common.sh@394 -- # return 1 00:05:32.437 14:56:03 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:32.437 1+0 records in 00:05:32.437 1+0 records out 00:05:32.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450422 s, 233 MB/s 00:05:32.437 14:56:03 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:32.437 14:56:03 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:32.437 14:56:03 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:05:32.437 14:56:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:32.437 14:56:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:32.437 No valid GPT data, bailing 00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # pt= 00:05:32.437 14:56:03 -- scripts/common.sh@394 -- # return 1 00:05:32.437 14:56:03 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:32.437 1+0 records in 00:05:32.437 1+0 records out 00:05:32.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00445779 s, 235 MB/s 00:05:32.437 14:56:03 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:32.437 14:56:03 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:32.437 14:56:03 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n2 00:05:32.437 14:56:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:32.437 14:56:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:32.437 No valid GPT data, bailing 00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:32.437 14:56:03 -- scripts/common.sh@393 -- # pt= 00:05:32.437 14:56:03 -- scripts/common.sh@394 -- # return 1 00:05:32.437 14:56:03 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:32.437 1+0 records in 00:05:32.437 1+0 records out 00:05:32.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494329 s, 212 MB/s 00:05:32.437 14:56:03 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:32.437 14:56:03 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:05:32.437 14:56:03 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n3 00:05:32.437 14:56:03 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:32.437 14:56:03 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:32.695 No valid GPT data, bailing 00:05:32.695 14:56:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:32.695 14:56:03 -- scripts/common.sh@393 -- # pt= 00:05:32.695 14:56:03 -- scripts/common.sh@394 -- # return 1 00:05:32.695 14:56:03 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:32.695 1+0 records in 00:05:32.695 1+0 records out 00:05:32.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471634 s, 222 MB/s 00:05:32.695 14:56:03 -- spdk/autotest.sh@116 -- # sync 00:05:32.695 14:56:03 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:32.695 14:56:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:32.695 14:56:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:34.597 14:56:05 -- spdk/autotest.sh@122 -- # uname -s 00:05:34.597 14:56:05 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 
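[editor's sketch] The trace above shows the pre_cleanup step wiping every non-zoned NVMe namespace that has no recognizable partition table before the setup tests run. The following standalone loop is a minimal illustration inferred only from the commands visible in this log (the ls/grep device listing, the /sys/block/*/queue/zoned check, blkid -s PTTYPE, and the 1 MiB dd); it is not the actual autotest.sh code, and it omits the scripts/spdk-gpt.py check the script also performs.

#!/usr/bin/env bash
# Sketch: wipe non-zoned NVMe namespaces that carry no partition table.
# Inferred from the trace above; the real script also consults spdk-gpt.py.
for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    name=$(basename "$dev")
    # Skip zoned namespaces (queue/zoned reports something other than "none").
    if [[ -e /sys/block/$name/queue/zoned ]] \
       && [[ "$(cat /sys/block/$name/queue/zoned)" != none ]]; then
        continue
    fi
    # blkid prints the partition-table type (e.g. "gpt") or nothing at all.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z "$pt" ]]; then
        # No valid partition table: zero the first 1 MiB, as the log shows.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done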
00:05:34.597 14:56:05 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:34.597 14:56:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.597 14:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.597 14:56:05 -- common/autotest_common.sh@10 -- # set +x 00:05:34.597 ************************************ 00:05:34.597 START TEST setup.sh 00:05:34.597 ************************************ 00:05:34.597 14:56:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:34.597 * Looking for test storage... 00:05:34.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:34.598 14:56:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.598 14:56:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.598 14:56:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.856 14:56:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.856 14:56:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.856 14:56:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.856 14:56:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.856 14:56:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.856 14:56:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.856 14:56:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.857 14:56:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.857 14:56:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.857 14:56:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.857 14:56:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.857 14:56:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.857 14:56:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.857 14:56:05 -- scripts/common.sh@344 -- # : 1 00:05:34.857 14:56:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.857 14:56:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.857 14:56:05 -- scripts/common.sh@364 -- # decimal 1 00:05:34.857 14:56:05 -- scripts/common.sh@352 -- # local d=1 00:05:34.857 14:56:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.857 14:56:05 -- scripts/common.sh@354 -- # echo 1 00:05:34.857 14:56:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.857 14:56:05 -- scripts/common.sh@365 -- # decimal 2 00:05:34.857 14:56:05 -- scripts/common.sh@352 -- # local d=2 00:05:34.857 14:56:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.857 14:56:05 -- scripts/common.sh@354 -- # echo 2 00:05:34.857 14:56:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.857 14:56:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.857 14:56:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.857 14:56:05 -- scripts/common.sh@367 -- # return 0 00:05:34.857 14:56:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- setup/test-setup.sh@10 -- # uname -s 00:05:34.857 14:56:05 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:34.857 14:56:05 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:34.857 14:56:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.857 14:56:05 -- common/autotest_common.sh@10 -- # set +x 00:05:34.857 ************************************ 00:05:34.857 START TEST acl 00:05:34.857 ************************************ 00:05:34.857 14:56:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:34.857 * Looking for test storage... 
00:05:34.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:34.857 14:56:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:34.857 14:56:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:34.857 14:56:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:34.857 14:56:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:34.857 14:56:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:34.857 14:56:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:34.857 14:56:05 -- scripts/common.sh@335 -- # IFS=.-: 00:05:34.857 14:56:05 -- scripts/common.sh@335 -- # read -ra ver1 00:05:34.857 14:56:05 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.857 14:56:05 -- scripts/common.sh@336 -- # read -ra ver2 00:05:34.857 14:56:05 -- scripts/common.sh@337 -- # local 'op=<' 00:05:34.857 14:56:05 -- scripts/common.sh@339 -- # ver1_l=2 00:05:34.857 14:56:05 -- scripts/common.sh@340 -- # ver2_l=1 00:05:34.857 14:56:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:34.857 14:56:05 -- scripts/common.sh@343 -- # case "$op" in 00:05:34.857 14:56:05 -- scripts/common.sh@344 -- # : 1 00:05:34.857 14:56:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:34.857 14:56:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.857 14:56:05 -- scripts/common.sh@364 -- # decimal 1 00:05:34.857 14:56:05 -- scripts/common.sh@352 -- # local d=1 00:05:34.857 14:56:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.857 14:56:05 -- scripts/common.sh@354 -- # echo 1 00:05:34.857 14:56:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:34.857 14:56:05 -- scripts/common.sh@365 -- # decimal 2 00:05:34.857 14:56:05 -- scripts/common.sh@352 -- # local d=2 00:05:34.857 14:56:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.857 14:56:05 -- scripts/common.sh@354 -- # echo 2 00:05:34.857 14:56:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:34.857 14:56:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:34.857 14:56:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:34.857 14:56:05 -- scripts/common.sh@367 -- # return 0 00:05:34.857 14:56:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:34.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.857 --rc genhtml_branch_coverage=1 00:05:34.857 --rc genhtml_function_coverage=1 00:05:34.857 --rc genhtml_legend=1 00:05:34.857 --rc geninfo_all_blocks=1 00:05:34.857 --rc geninfo_unexecuted_blocks=1 00:05:34.857 00:05:34.857 ' 00:05:34.857 14:56:05 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:34.858 14:56:05 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:34.858 14:56:05 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:34.858 14:56:05 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:34.858 14:56:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:34.858 14:56:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:34.858 14:56:05 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:34.858 14:56:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:34.858 14:56:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:34.858 14:56:05 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:34.858 14:56:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:34.858 14:56:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:34.858 14:56:05 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:34.858 14:56:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:34.858 14:56:05 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:34.858 14:56:05 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:34.858 14:56:05 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:34.858 14:56:05 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:34.858 14:56:05 -- setup/acl.sh@12 -- # devs=() 00:05:34.858 14:56:05 -- setup/acl.sh@12 -- # declare -a devs 00:05:34.858 14:56:05 -- setup/acl.sh@13 -- # drivers=() 00:05:34.858 14:56:05 -- setup/acl.sh@13 -- # declare -A drivers 00:05:34.858 14:56:05 -- setup/acl.sh@51 -- # setup reset 00:05:34.858 14:56:05 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.858 14:56:05 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.791 14:56:06 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:35.791 14:56:06 -- setup/acl.sh@16 -- # local dev driver 00:05:35.791 14:56:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:35.791 14:56:06 -- setup/acl.sh@15 -- # setup output status 00:05:35.791 14:56:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.792 14:56:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:35.792 Hugepages 00:05:35.792 node hugesize free / total 00:05:35.792 14:56:06 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:35.792 14:56:06 -- setup/acl.sh@19 -- # continue 00:05:35.792 14:56:06 -- setup/acl.sh@18 -- # read -r _ 
dev _ _ _ driver _ 00:05:35.792 00:05:35.792 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.792 14:56:06 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:35.792 14:56:06 -- setup/acl.sh@19 -- # continue 00:05:35.792 14:56:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.050 14:56:06 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:36.050 14:56:06 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:36.050 14:56:06 -- setup/acl.sh@20 -- # continue 00:05:36.050 14:56:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.050 14:56:06 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:36.050 14:56:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:36.050 14:56:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:36.050 14:56:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:36.050 14:56:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:36.050 14:56:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.050 14:56:06 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:36.050 14:56:06 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:36.050 14:56:06 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:36.050 14:56:06 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:36.050 14:56:06 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:36.050 14:56:06 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:36.050 14:56:06 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:36.050 14:56:06 -- setup/acl.sh@54 -- # run_test denied denied 00:05:36.050 14:56:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.050 14:56:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.050 14:56:06 -- common/autotest_common.sh@10 -- # set +x 00:05:36.050 ************************************ 00:05:36.050 START TEST denied 00:05:36.050 ************************************ 00:05:36.050 14:56:06 -- common/autotest_common.sh@1114 -- # denied 00:05:36.050 14:56:06 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:36.050 14:56:06 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:36.050 14:56:06 -- setup/acl.sh@38 -- # setup output config 00:05:36.050 14:56:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.050 14:56:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:36.987 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:36.987 14:56:07 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:36.987 14:56:07 -- setup/acl.sh@28 -- # local dev driver 00:05:36.987 14:56:07 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:36.987 14:56:07 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:36.987 14:56:07 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:36.987 14:56:07 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:36.987 14:56:07 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:36.987 14:56:07 -- setup/acl.sh@41 -- # setup reset 00:05:36.987 14:56:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.987 14:56:07 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.553 00:05:37.553 real 0m1.464s 00:05:37.553 user 0m0.583s 00:05:37.553 sys 0m0.823s 00:05:37.553 14:56:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:37.553 14:56:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.553 ************************************ 00:05:37.553 END TEST denied 00:05:37.553 
************************************ 00:05:37.553 14:56:08 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:37.553 14:56:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.553 14:56:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.553 14:56:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.553 ************************************ 00:05:37.553 START TEST allowed 00:05:37.553 ************************************ 00:05:37.553 14:56:08 -- common/autotest_common.sh@1114 -- # allowed 00:05:37.553 14:56:08 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:37.553 14:56:08 -- setup/acl.sh@45 -- # setup output config 00:05:37.553 14:56:08 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:37.553 14:56:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:37.553 14:56:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:38.490 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.490 14:56:09 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:38.490 14:56:09 -- setup/acl.sh@28 -- # local dev driver 00:05:38.490 14:56:09 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:38.490 14:56:09 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:38.490 14:56:09 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:38.490 14:56:09 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:38.490 14:56:09 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:38.490 14:56:09 -- setup/acl.sh@48 -- # setup reset 00:05:38.490 14:56:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:38.490 14:56:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.060 00:05:39.060 real 0m1.492s 00:05:39.060 user 0m0.687s 00:05:39.060 sys 0m0.809s 00:05:39.060 14:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.060 ************************************ 00:05:39.060 END TEST allowed 00:05:39.060 ************************************ 00:05:39.060 14:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.060 ************************************ 00:05:39.060 END TEST acl 00:05:39.060 ************************************ 00:05:39.060 00:05:39.060 real 0m4.341s 00:05:39.060 user 0m1.948s 00:05:39.060 sys 0m2.366s 00:05:39.060 14:56:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.060 14:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.060 14:56:09 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:39.060 14:56:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.061 14:56:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.061 14:56:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.061 ************************************ 00:05:39.061 START TEST hugepages 00:05:39.061 ************************************ 00:05:39.061 14:56:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:39.322 * Looking for test storage... 
00:05:39.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:39.322 14:56:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:39.322 14:56:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:39.322 14:56:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:39.322 14:56:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:39.322 14:56:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:39.322 14:56:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:39.322 14:56:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:39.322 14:56:10 -- scripts/common.sh@335 -- # IFS=.-: 00:05:39.322 14:56:10 -- scripts/common.sh@335 -- # read -ra ver1 00:05:39.322 14:56:10 -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.322 14:56:10 -- scripts/common.sh@336 -- # read -ra ver2 00:05:39.322 14:56:10 -- scripts/common.sh@337 -- # local 'op=<' 00:05:39.322 14:56:10 -- scripts/common.sh@339 -- # ver1_l=2 00:05:39.322 14:56:10 -- scripts/common.sh@340 -- # ver2_l=1 00:05:39.322 14:56:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:39.322 14:56:10 -- scripts/common.sh@343 -- # case "$op" in 00:05:39.322 14:56:10 -- scripts/common.sh@344 -- # : 1 00:05:39.322 14:56:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:39.322 14:56:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:39.322 14:56:10 -- scripts/common.sh@364 -- # decimal 1 00:05:39.322 14:56:10 -- scripts/common.sh@352 -- # local d=1 00:05:39.322 14:56:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.322 14:56:10 -- scripts/common.sh@354 -- # echo 1 00:05:39.322 14:56:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:39.322 14:56:10 -- scripts/common.sh@365 -- # decimal 2 00:05:39.322 14:56:10 -- scripts/common.sh@352 -- # local d=2 00:05:39.322 14:56:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.322 14:56:10 -- scripts/common.sh@354 -- # echo 2 00:05:39.322 14:56:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:39.322 14:56:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:39.322 14:56:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:39.322 14:56:10 -- scripts/common.sh@367 -- # return 0 00:05:39.322 14:56:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.322 14:56:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:39.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.322 --rc genhtml_branch_coverage=1 00:05:39.322 --rc genhtml_function_coverage=1 00:05:39.322 --rc genhtml_legend=1 00:05:39.322 --rc geninfo_all_blocks=1 00:05:39.322 --rc geninfo_unexecuted_blocks=1 00:05:39.322 00:05:39.322 ' 00:05:39.322 14:56:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:39.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.322 --rc genhtml_branch_coverage=1 00:05:39.322 --rc genhtml_function_coverage=1 00:05:39.322 --rc genhtml_legend=1 00:05:39.322 --rc geninfo_all_blocks=1 00:05:39.322 --rc geninfo_unexecuted_blocks=1 00:05:39.322 00:05:39.322 ' 00:05:39.322 14:56:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:39.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.322 --rc genhtml_branch_coverage=1 00:05:39.322 --rc genhtml_function_coverage=1 00:05:39.322 --rc genhtml_legend=1 00:05:39.322 --rc geninfo_all_blocks=1 00:05:39.322 --rc geninfo_unexecuted_blocks=1 00:05:39.322 00:05:39.322 ' 00:05:39.322 14:56:10 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:39.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.322 --rc genhtml_branch_coverage=1 00:05:39.322 --rc genhtml_function_coverage=1 00:05:39.322 --rc genhtml_legend=1 00:05:39.322 --rc geninfo_all_blocks=1 00:05:39.322 --rc geninfo_unexecuted_blocks=1 00:05:39.322 00:05:39.322 ' 00:05:39.322 14:56:10 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:39.322 14:56:10 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:39.322 14:56:10 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:39.322 14:56:10 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:39.322 14:56:10 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:39.322 14:56:10 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:39.322 14:56:10 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:39.322 14:56:10 -- setup/common.sh@18 -- # local node= 00:05:39.322 14:56:10 -- setup/common.sh@19 -- # local var val 00:05:39.322 14:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.322 14:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.322 14:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.322 14:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.322 14:56:10 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.322 14:56:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.322 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.322 14:56:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 4864012 kB' 'MemAvailable: 7365668 kB' 'Buffers: 3200 kB' 'Cached: 2705668 kB' 'SwapCached: 0 kB' 'Active: 455292 kB' 'Inactive: 2370528 kB' 'Active(anon): 127464 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370528 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 436 kB' 'Writeback: 0 kB' 'AnonPages: 118580 kB' 'Mapped: 51140 kB' 'Shmem: 10512 kB' 'KReclaimable: 80496 kB' 'Slab: 180060 kB' 'SReclaimable: 80496 kB' 'SUnreclaim: 99564 kB' 'KernelStack: 6880 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12411008 kB' 'Committed_AS: 320936 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:39.322 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.322 14:56:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.322 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.322 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.322 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- 
setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.323 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.323 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # continue 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.324 14:56:10 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.324 14:56:10 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:39.324 14:56:10 -- setup/common.sh@33 -- # echo 2048 00:05:39.324 14:56:10 -- setup/common.sh@33 -- # return 0 00:05:39.324 14:56:10 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:39.324 14:56:10 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:39.324 14:56:10 -- setup/hugepages.sh@18 -- 
# global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:39.324 14:56:10 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:39.324 14:56:10 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:39.324 14:56:10 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:39.324 14:56:10 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:39.324 14:56:10 -- setup/hugepages.sh@207 -- # get_nodes 00:05:39.324 14:56:10 -- setup/hugepages.sh@27 -- # local node 00:05:39.324 14:56:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:39.324 14:56:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:39.324 14:56:10 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:39.324 14:56:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:39.324 14:56:10 -- setup/hugepages.sh@208 -- # clear_hp 00:05:39.324 14:56:10 -- setup/hugepages.sh@37 -- # local node hp 00:05:39.324 14:56:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:39.324 14:56:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.324 14:56:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:39.324 14:56:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:39.324 14:56:10 -- setup/hugepages.sh@41 -- # echo 0 00:05:39.324 14:56:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:39.324 14:56:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:39.324 14:56:10 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:39.324 14:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.324 14:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.324 14:56:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.583 ************************************ 00:05:39.583 START TEST default_setup 00:05:39.583 ************************************ 00:05:39.583 14:56:10 -- common/autotest_common.sh@1114 -- # default_setup 00:05:39.583 14:56:10 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:39.583 14:56:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:39.583 14:56:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:39.583 14:56:10 -- setup/hugepages.sh@51 -- # shift 00:05:39.583 14:56:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:39.583 14:56:10 -- setup/hugepages.sh@52 -- # local node_ids 00:05:39.583 14:56:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:39.584 14:56:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:39.584 14:56:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:39.584 14:56:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:39.584 14:56:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:39.584 14:56:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:39.584 14:56:10 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:39.584 14:56:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:39.584 14:56:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:39.584 14:56:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:39.584 14:56:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:39.584 14:56:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:39.584 14:56:10 -- setup/hugepages.sh@73 -- # return 0 00:05:39.584 14:56:10 -- setup/hugepages.sh@137 -- # setup output 00:05:39.584 14:56:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:39.584 14:56:10 -- setup/common.sh@10 
-- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.152 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.152 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.152 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:40.416 14:56:10 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:40.416 14:56:10 -- setup/hugepages.sh@89 -- # local node 00:05:40.416 14:56:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.416 14:56:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.416 14:56:10 -- setup/hugepages.sh@92 -- # local surp 00:05:40.416 14:56:10 -- setup/hugepages.sh@93 -- # local resv 00:05:40.416 14:56:10 -- setup/hugepages.sh@94 -- # local anon 00:05:40.416 14:56:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.416 14:56:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.416 14:56:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.416 14:56:10 -- setup/common.sh@18 -- # local node= 00:05:40.416 14:56:10 -- setup/common.sh@19 -- # local var val 00:05:40.416 14:56:10 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.416 14:56:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.416 14:56:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.416 14:56:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.416 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.416 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6886824 kB' 'MemAvailable: 9388304 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 457068 kB' 'Inactive: 2370532 kB' 'Active(anon): 129240 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 120376 kB' 'Mapped: 51012 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179864 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99728 kB' 'KernelStack: 6800 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.416 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.416 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- 
setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.417 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.417 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.417 14:56:11 -- setup/hugepages.sh@97 -- # anon=0 00:05:40.417 14:56:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.417 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.417 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.417 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.417 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.417 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.417 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.417 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.417 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.417 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6886824 kB' 'MemAvailable: 9388304 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456792 kB' 'Inactive: 2370532 kB' 'Active(anon): 128964 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 120052 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179864 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99728 kB' 'KernelStack: 6816 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.417 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.417 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 
00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- 
setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 
00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.418 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.418 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.418 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.418 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.418 14:56:11 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.418 14:56:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.418 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.418 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.418 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.418 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.418 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.419 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.419 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.419 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.419 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.419 
14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6887088 kB' 'MemAvailable: 9388568 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456628 kB' 'Inactive: 2370532 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119904 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179860 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99724 kB' 'KernelStack: 6832 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 
14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.419 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.419 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 
14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.420 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.420 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.420 nr_hugepages=1024 00:05:40.420 resv_hugepages=0 00:05:40.420 surplus_hugepages=0 00:05:40.420 anon_hugepages=0 00:05:40.420 14:56:11 -- setup/hugepages.sh@100 -- # resv=0 00:05:40.420 14:56:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:40.420 14:56:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.420 14:56:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.420 14:56:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.420 14:56:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.420 14:56:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:40.420 14:56:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.420 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.420 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.420 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.420 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.420 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.420 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.420 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.420 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.420 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6886840 kB' 'MemAvailable: 9388320 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456572 kB' 'Inactive: 2370532 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179860 kB' 
'SReclaimable: 80136 kB' 'SUnreclaim: 99724 kB' 'KernelStack: 6816 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.420 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.420 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 
14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- 
setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- 
setup/common.sh@32 -- # continue 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.421 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.421 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.421 14:56:11 -- setup/common.sh@33 -- # echo 1024 00:05:40.421 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.421 14:56:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.421 14:56:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.422 14:56:11 -- setup/hugepages.sh@27 -- # local node 00:05:40.422 14:56:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.422 14:56:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.422 14:56:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.422 14:56:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.422 14:56:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.422 14:56:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.422 14:56:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.422 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.422 14:56:11 -- setup/common.sh@18 -- # local node=0 00:05:40.422 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.422 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.422 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.422 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.422 14:56:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.422 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.422 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6886840 kB' 'MemUsed: 5352276 kB' 'SwapCached: 0 kB' 'Active: 456564 kB' 'Inactive: 2370532 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370532 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'FilePages: 2708856 kB' 'Mapped: 50924 kB' 'AnonPages: 119820 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80136 kB' 'Slab: 179860 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 
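The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" lines above is the xtrace of setup/common.sh's get_meminfo helper: it walks a meminfo file with IFS=': ', skipping every field until the requested key matches, then echoes that field's value (1024 here) and returns. A minimal sketch of that lookup, assuming an illustrative function name (get_meminfo_value is not the repo's identifier) and handling only the plain /proc/meminfo layout; the real helper also accepts a node number and strips the "Node <N> " prefix from /sys/devices/system/node/node<N>/meminfo lines, which is what the node-0 trace that follows is doing:

    #!/usr/bin/env bash
    # Sketch of the lookup the trace performs: scan a meminfo file with
    # ': ' as the separator and print the value of one key.
    get_meminfo_value() {                # illustrative name, not the repo's
        local key=$1 mem_f=${2:-/proc/meminfo}
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"              # numeric part; the "kB" unit lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo_value HugePages_Total    # prints 1024 on the system traced above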
14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.422 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.422 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.423 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.423 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.423 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.423 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.423 14:56:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.423 node0=1024 expecting 1024 00:05:40.423 14:56:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.423 14:56:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.423 14:56:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.423 14:56:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.423 14:56:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.423 00:05:40.423 real 0m1.053s 00:05:40.423 user 0m0.484s 00:05:40.423 sys 0m0.468s 00:05:40.423 ************************************ 00:05:40.423 END TEST default_setup 00:05:40.423 ************************************ 00:05:40.423 14:56:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:40.423 14:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 14:56:11 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:40.682 14:56:11 
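default_setup passes once the counters line up: HugePages_Total read back from the meminfo files equals the requested count plus surplus and reserved pages, and the per-node totals (only node 0 on this VM) sum to the same 1024, which is what the "node0=1024 expecting 1024" line records. A compact restatement of that arithmetic, with illustrative variable names and the values taken from the log above:

    #!/usr/bin/env bash
    # Restating the check that just passed ("node0=1024 expecting 1024"),
    # using the values reported in the log; variable names are illustrative.
    nr_hugepages=1024      # pages the test requested
    surp=0                 # HugePages_Surp
    resv=0                 # HugePages_Rsvd
    total=1024             # HugePages_Total

    (( total == nr_hugepages + surp + resv )) || { echo "hugepage count mismatch"; exit 1; }

    # Single NUMA node in this VM, so node 0 must hold everything.
    declare -A node_pages=([0]=1024)
    sum=0
    for node in "${!node_pages[@]}"; do
        (( sum += node_pages[node] ))
    done
    (( sum == total )) && echo "node0=${node_pages[0]} expecting $total"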
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.682 14:56:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.682 14:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 ************************************ 00:05:40.682 START TEST per_node_1G_alloc 00:05:40.682 ************************************ 00:05:40.682 14:56:11 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:05:40.683 14:56:11 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:40.683 14:56:11 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:40.683 14:56:11 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:40.683 14:56:11 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:40.683 14:56:11 -- setup/hugepages.sh@51 -- # shift 00:05:40.683 14:56:11 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:40.683 14:56:11 -- setup/hugepages.sh@52 -- # local node_ids 00:05:40.683 14:56:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:40.683 14:56:11 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:40.683 14:56:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:40.683 14:56:11 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:40.683 14:56:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:40.683 14:56:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:40.683 14:56:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:40.683 14:56:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:40.683 14:56:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:40.683 14:56:11 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:40.683 14:56:11 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:40.683 14:56:11 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:40.683 14:56:11 -- setup/hugepages.sh@73 -- # return 0 00:05:40.683 14:56:11 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:40.683 14:56:11 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:40.683 14:56:11 -- setup/hugepages.sh@146 -- # setup output 00:05:40.683 14:56:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:40.683 14:56:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:40.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:40.944 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.944 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:40.944 14:56:11 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:40.944 14:56:11 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:40.944 14:56:11 -- setup/hugepages.sh@89 -- # local node 00:05:40.944 14:56:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:40.944 14:56:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:40.944 14:56:11 -- setup/hugepages.sh@92 -- # local surp 00:05:40.944 14:56:11 -- setup/hugepages.sh@93 -- # local resv 00:05:40.945 14:56:11 -- setup/hugepages.sh@94 -- # local anon 00:05:40.945 14:56:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:40.945 14:56:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:40.945 14:56:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:40.945 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.945 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.945 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.945 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.945 14:56:11 -- 
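per_node_1G_alloc requests 1048576 kB (1 GiB) of hugepages pinned to node 0; at the default 2048 kB hugepage size that works out to 512 pages, which is why the test sets nr_hugepages=512 and runs scripts/setup.sh with NRHUGE=512 HUGENODE=0. A sketch of the arithmetic; the commented sysfs write is an assumption about the kernel's per-node knob, shown for illustration only, since the allocation in this run is done by setup.sh itself:

    #!/usr/bin/env bash
    # Why the test settles on 512 pages: 1 GiB requested on node 0 at the
    # default 2048 kB hugepage size.
    size_kb=1048576
    hugepage_kb=2048
    nr_pages=$(( size_kb / hugepage_kb ))     # 512
    echo "NRHUGE=$nr_pages HUGENODE=0"

    # Assumption, for illustration only: the kernel exposes a per-node
    # counter like the one below; in this run scripts/setup.sh performs
    # the actual allocation.
    # echo "$nr_pages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages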
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.945 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.945 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.945 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7935060 kB' 'MemAvailable: 10436556 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 457092 kB' 'Inactive: 2370540 kB' 'Active(anon): 129264 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 120200 kB' 'Mapped: 51060 kB' 'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 179912 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 99760 kB' 'KernelStack: 6936 kB' 'PageTables: 4684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 321680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 
-- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 
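verify_nr_hugepages is gathering its inputs in the trace around here: it only reads AnonHugePages when transparent hugepages are not set to "never", then fetches HugePages_Surp and HugePages_Rsvd before checking HugePages_Total against the 512 pages it just asked for. A rough equivalent using awk in place of the script's own read loop (the awk form is a shorthand for illustration, not how setup/common.sh actually does it):

    #!/usr/bin/env bash
    # Rough equivalent of what verify_nr_hugepages is collecting here; awk
    # stands in for the script's own read loop, and 512 is the count this
    # test requested.
    anon=0
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)    # 0 kB in this run
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)       # 0
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)       # 0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)     # 512
    (( total == 512 + surp + resv )) && echo "hugepage verification ok (anon=${anon} kB)"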
14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.945 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.945 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:40.946 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.946 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.946 14:56:11 -- setup/hugepages.sh@97 -- # anon=0 00:05:40.946 14:56:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:40.946 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.946 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.946 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.946 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.946 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.946 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.946 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.946 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.946 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7935412 kB' 'MemAvailable: 10436908 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456556 kB' 'Inactive: 2370540 kB' 
'Active(anon): 128728 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119860 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 179888 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 99736 kB' 'KernelStack: 6816 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # 
continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.946 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.946 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # 
[[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.947 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.947 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.947 14:56:11 -- setup/hugepages.sh@99 -- # surp=0 00:05:40.947 14:56:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:40.947 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:40.947 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.947 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.947 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.947 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.947 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.947 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.947 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.947 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7935412 kB' 'MemAvailable: 10436908 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456544 kB' 'Inactive: 2370540 kB' 'Active(anon): 128716 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119844 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80152 kB' 'Slab: 179880 kB' 'SReclaimable: 80152 kB' 'SUnreclaim: 99728 kB' 'KernelStack: 6816 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.947 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.947 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.948 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.948 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:40.949 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:40.949 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:40.949 14:56:11 -- setup/hugepages.sh@100 -- # resv=0 00:05:40.949 14:56:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:40.949 nr_hugepages=512 00:05:40.949 14:56:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:40.949 resv_hugepages=0 00:05:40.949 14:56:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:40.949 surplus_hugepages=0 00:05:40.949 14:56:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:40.949 anon_hugepages=0 00:05:40.949 14:56:11 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:40.949 14:56:11 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:40.949 14:56:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:40.949 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:40.949 14:56:11 -- setup/common.sh@18 -- # local node= 00:05:40.949 14:56:11 -- setup/common.sh@19 -- # local var val 00:05:40.949 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.949 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.949 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:40.949 14:56:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:40.949 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.949 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7935412 kB' 'MemAvailable: 10436916 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456532 kB' 'Inactive: 2370540 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119832 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179896 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99728 kB' 'KernelStack: 6816 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.949 14:56:11 -- setup/common.sh@32 -- # continue 00:05:40.949 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 
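[editor's note] The trace in the surrounding lines is the get_meminfo-style lookup scanning /proc/meminfo key by key. A minimal hand-written sketch of that idea follows (assumptions: bash with extglob; the function and variable names are illustrative, not copied from setup/common.sh): read /proc/meminfo, or a node's meminfo file when a node id is given, strip the "Node <N> " prefix, and print the value for the requested key.

#!/usr/bin/env bash
# Illustrative only: approximates the get_meminfo behaviour traced above.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node statistics live in sysfs when a node number is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix
    # so both files parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example: get_meminfo_sketch HugePages_Total    -> 512 on this run
#          get_meminfo_sketch HugePages_Surp 0   -> node0 value (0 here)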
14:56:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.210 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.210 14:56:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 
14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.211 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.211 14:56:11 -- setup/common.sh@33 -- # echo 512 00:05:41.211 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:41.211 14:56:11 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:41.211 14:56:11 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.211 14:56:11 -- setup/hugepages.sh@27 -- # local node 00:05:41.211 14:56:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.211 14:56:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:41.211 14:56:11 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.211 14:56:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.211 14:56:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.211 14:56:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.211 14:56:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.211 14:56:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.211 14:56:11 -- setup/common.sh@18 -- # local node=0 00:05:41.211 14:56:11 -- setup/common.sh@19 -- # local 
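[editor's note] The get_nodes step traced just above walks /sys/devices/system/node/node<N> and records the expected per-node hugepage count (512 here, single node). A small stand-alone sketch of that enumeration, with a made-up array name rather than SPDK's nodes_sys, and extglob/nullglob assumed:

shopt -s extglob nullglob
declare -A nodes_expected            # illustrative name only
for node in /sys/devices/system/node/node+([0-9]); do
    # ${node##*node} strips everything up to the last "node", leaving the id.
    nodes_expected[${node##*node}]=512
done
echo "found ${#nodes_expected[@]} NUMA node(s): ${!nodes_expected[*]}"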
var val 00:05:41.211 14:56:11 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.211 14:56:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.211 14:56:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.211 14:56:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.211 14:56:11 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.211 14:56:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.211 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7935412 kB' 'MemUsed: 4303704 kB' 'SwapCached: 0 kB' 'Active: 456632 kB' 'Inactive: 2370540 kB' 'Active(anon): 128804 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'FilePages: 2708856 kB' 'Mapped: 50924 kB' 'AnonPages: 119952 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80168 kB' 'Slab: 179896 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- 
setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # continue 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.212 14:56:11 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.212 14:56:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.212 14:56:11 -- setup/common.sh@33 -- # echo 0 00:05:41.212 14:56:11 -- setup/common.sh@33 -- # return 0 00:05:41.212 14:56:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.212 14:56:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.212 14:56:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.213 14:56:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:41.213 14:56:11 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:41.213 node0=512 expecting 512 00:05:41.213 14:56:11 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:41.213 00:05:41.213 real 0m0.569s 00:05:41.213 user 0m0.273s 00:05:41.213 sys 0m0.298s 00:05:41.213 ************************************ 00:05:41.213 END TEST per_node_1G_alloc 00:05:41.213 ************************************ 00:05:41.213 14:56:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.213 14:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.213 14:56:11 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:41.213 14:56:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.213 14:56:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.213 14:56:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.213 ************************************ 00:05:41.213 START TEST even_2G_alloc 00:05:41.213 ************************************ 00:05:41.213 14:56:11 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:05:41.213 14:56:11 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:41.213 14:56:11 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:41.213 14:56:11 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:41.213 14:56:11 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:41.213 14:56:11 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:41.213 14:56:11 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:41.213 14:56:11 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:41.213 14:56:11 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:41.213 14:56:11 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:41.213 14:56:11 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:41.213 14:56:11 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:41.213 14:56:11 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:41.213 14:56:11 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:41.213 14:56:11 -- 
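[editor's note] A quick sanity check on the per_node_1G_alloc run that just finished (my arithmetic, not part of the test output): 512 hugepages of 2048 kB account exactly for the Hugetlb figure printed in the snapshot, and the test's pass condition is that the total equals requested plus surplus plus reserved pages.

# Worked check against the values visible in this log section.
total=512 hugepagesize_kb=2048
nr_hugepages=512 surp=0 resv=0
(( total * hugepagesize_kb == 1048576 )) && echo "Hugetlb = 1048576 kB, consistent"
(( total == nr_hugepages + surp + resv )) && echo "512 == 512 + 0 + 0, test passes"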
setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:41.213 14:56:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:41.213 14:56:11 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:41.213 14:56:11 -- setup/hugepages.sh@83 -- # : 0 00:05:41.213 14:56:11 -- setup/hugepages.sh@84 -- # : 0 00:05:41.213 14:56:11 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:41.213 14:56:11 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:41.213 14:56:11 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:41.213 14:56:11 -- setup/hugepages.sh@153 -- # setup output 00:05:41.213 14:56:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.213 14:56:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:41.473 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.473 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.473 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:41.473 14:56:12 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:41.473 14:56:12 -- setup/hugepages.sh@89 -- # local node 00:05:41.473 14:56:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:41.473 14:56:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:41.473 14:56:12 -- setup/hugepages.sh@92 -- # local surp 00:05:41.473 14:56:12 -- setup/hugepages.sh@93 -- # local resv 00:05:41.473 14:56:12 -- setup/hugepages.sh@94 -- # local anon 00:05:41.473 14:56:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:41.473 14:56:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:41.473 14:56:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:41.474 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:41.474 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:41.474 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.474 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.474 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.474 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.474 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.474 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6889280 kB' 'MemAvailable: 9390784 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456848 kB' 'Inactive: 2370540 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 120128 kB' 'Mapped: 51144 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179888 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99720 kB' 'KernelStack: 6856 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
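[editor's note] The even_2G_alloc case set up above asks for 1024 default-size (2 MiB) pages spread evenly across nodes before re-running the same verification. Outside the harness, the equivalent invocation of the SPDK setup script would look roughly like the sketch below; the NRHUGE and HUGE_EVEN_ALLOC values come from the trace, while the sudo -E usage is an assumption about the local environment.

# Request 1024 x 2 MiB hugepages, allocated evenly across NUMA nodes,
# then show the resulting counters.
export NRHUGE=1024
export HUGE_EVEN_ALLOC=yes
sudo -E /home/vagrant/spdk_repo/spdk/scripts/setup.sh
grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo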
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 
14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.474 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.474 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # 
continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:41.475 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:41.475 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:41.475 14:56:12 -- setup/hugepages.sh@97 -- # anon=0 00:05:41.475 14:56:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:41.475 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.475 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:41.475 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:41.475 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.475 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.475 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.475 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.475 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.475 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6889056 kB' 'MemAvailable: 9390560 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456612 kB' 'Inactive: 2370540 kB' 'Active(anon): 128784 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119940 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179948 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6832 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 
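[editor's note] The anon_hugepages figure that just resolved to 0 is only consulted when transparent hugepages are not globally disabled, which is what the earlier check against the "always [madvise] never" policy string verifies. A small stand-alone illustration of that guard (standard kernel sysfs path; the awk parsing is my own, not the script's):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    # Count anonymous THP usage in kB, as reported by the kernel.
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"
else
    echo "THP disabled; AnonHugePages not counted"
fi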
00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.475 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.475 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.737 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.737 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # 
continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.738 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:41.738 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:41.738 14:56:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:41.738 14:56:12 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:41.738 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:41.738 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:41.738 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:41.738 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.738 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.738 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.738 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.738 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.738 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6889056 kB' 'MemAvailable: 9390560 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456580 kB' 'Inactive: 2370540 kB' 'Active(anon): 128752 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119856 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179948 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6816 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.738 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.738 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 
00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- 
setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.739 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.739 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 
00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:41.740 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:41.740 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:41.740 nr_hugepages=1024 00:05:41.740 resv_hugepages=0 00:05:41.740 surplus_hugepages=0 00:05:41.740 anon_hugepages=0 00:05:41.740 14:56:12 -- setup/hugepages.sh@100 -- # resv=0 00:05:41.740 14:56:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:41.740 14:56:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:41.740 14:56:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:41.740 14:56:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:41.740 14:56:12 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.740 14:56:12 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:41.740 14:56:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:41.740 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:41.740 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:41.740 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:41.740 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.740 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.740 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:41.740 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:41.740 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.740 14:56:12 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6889056 kB' 'MemAvailable: 9390560 kB' 'Buffers: 3200 kB' 'Cached: 2705656 kB' 'SwapCached: 0 kB' 'Active: 456572 kB' 'Inactive: 2370540 kB' 'Active(anon): 128744 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179948 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99780 kB' 'KernelStack: 6800 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 
14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.740 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.740 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 
00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 
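With anon, surp and resv all read back as 0 and HugePages_Total about to be echoed as 1024, the (( 1024 == nr_hugepages + surp + resv )) guard in the trace reduces to plain arithmetic. A small worked restatement, standalone rather than the repo script, with variable names mirroring the trace:

    nr_hugepages=1024; anon=0; surp=0; resv=0
    total=1024                                   # HugePages_Total from /proc/meminfo
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"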
00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:41.741 14:56:12 -- setup/common.sh@33 -- # echo 1024 00:05:41.741 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:41.741 14:56:12 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:41.741 14:56:12 -- setup/hugepages.sh@112 -- # get_nodes 00:05:41.741 14:56:12 -- setup/hugepages.sh@27 -- # local node 00:05:41.741 14:56:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:41.741 14:56:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:41.741 14:56:12 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:41.741 14:56:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:41.741 14:56:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:41.741 14:56:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:41.741 14:56:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:41.741 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:41.741 14:56:12 -- setup/common.sh@18 -- # local node=0 00:05:41.741 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:41.741 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:41.741 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:41.741 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:41.741 14:56:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:41.741 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:41.741 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.741 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.741 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6889056 kB' 'MemUsed: 5350060 kB' 'SwapCached: 0 kB' 'Active: 456564 kB' 'Inactive: 2370540 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370540 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 456 kB' 'Writeback: 0 kB' 'FilePages: 2708856 kB' 'Mapped: 50924 kB' 'AnonPages: 119840 kB' 'Shmem: 10488 kB' 'KernelStack: 6816 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80168 kB' 'Slab: 179948 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:41.742 14:56:12 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 
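For this per-node pass the trace switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the ${mem[@]#Node +([0-9]) } expansion strips before the same key/value scan runs. A rough standalone equivalent (assumes extglob semantics like the traced script; the file path is taken from the trace, the grep at the end is only for illustration):

    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # drop the leading "Node 0 " prefix
    printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp'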
00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- 
setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.742 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.742 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.743 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.743 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.743 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.743 14:56:12 -- setup/common.sh@32 -- # continue 00:05:41.743 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:41.743 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:41.743 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:41.743 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:41.743 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:41.743 node0=1024 expecting 1024 00:05:41.743 14:56:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:41.743 14:56:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:41.743 14:56:12 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:05:41.743 14:56:12 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:41.743 14:56:12 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:41.743 00:05:41.743 real 0m0.551s 00:05:41.743 user 0m0.261s 00:05:41.743 sys 0m0.299s 00:05:41.743 ************************************ 00:05:41.743 END TEST even_2G_alloc 00:05:41.743 ************************************ 00:05:41.743 14:56:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:41.743 14:56:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.743 14:56:12 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:41.743 14:56:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.743 14:56:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.743 14:56:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.743 ************************************ 00:05:41.743 START TEST odd_alloc 00:05:41.743 ************************************ 00:05:41.743 14:56:12 -- common/autotest_common.sh@1114 -- # odd_alloc 00:05:41.743 14:56:12 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:41.743 14:56:12 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:41.743 14:56:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:41.743 14:56:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:41.743 14:56:12 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:41.743 14:56:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:41.743 14:56:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:41.743 14:56:12 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:41.743 14:56:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:41.743 14:56:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:41.743 14:56:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:41.743 14:56:12 -- setup/hugepages.sh@83 -- # : 0 00:05:41.743 14:56:12 -- setup/hugepages.sh@84 -- # : 0 00:05:41.743 14:56:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:41.743 14:56:12 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:41.743 14:56:12 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:41.743 14:56:12 -- setup/hugepages.sh@160 -- # setup output 00:05:41.743 14:56:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.743 14:56:12 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.002 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.265 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.265 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.265 14:56:12 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:42.265 14:56:12 -- setup/hugepages.sh@89 -- # local node 00:05:42.265 14:56:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.265 14:56:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.265 14:56:12 -- setup/hugepages.sh@92 -- # local surp 00:05:42.265 14:56:12 -- setup/hugepages.sh@93 -- # local resv 00:05:42.265 14:56:12 -- setup/hugepages.sh@94 -- # local anon 00:05:42.265 14:56:12 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.265 14:56:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.265 14:56:12 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.265 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:42.265 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:42.265 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.265 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.265 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.265 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.265 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.265 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6893468 kB' 'MemAvailable: 9394976 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456568 kB' 'Inactive: 2370544 kB' 'Active(anon): 128740 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119848 kB' 'Mapped: 51052 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6848 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 
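
The trace above is the verify step's meminfo bookkeeping: setup/hugepages.sh first checks that transparent hugepages are not forced off (the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled is the active mode, so anything other than "[never]" passes), then setup/common.sh's get_meminfo dumps the meminfo contents and walks them one "field: value" pair at a time with IFS=': ' until it reaches the requested field, AnonHugePages here. A minimal standalone sketch of those two idioms follows; the helper names are illustrative, not the actual setup/common.sh functions.

    # Sketch only: meminfo_value/thp_not_disabled are illustrative names,
    # not the real setup/common.sh helpers.

    # Look up one field from /proc/meminfo, or from a node's meminfo when a
    # node number is given (per-node files prefix every line with "Node <n> ").
    meminfo_value() {
        local key=$1 node=${2-} file=/proc/meminfo var val _
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$file")
        echo 0
    }

    # The sysfs file reads like "always [madvise] never"; the selection is the
    # bracketed word, so anon hugepages count unless the selection is [never].
    thp_not_disabled() {
        [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null) != *"[never]"* ]]
    }

    # e.g.: thp_not_disabled && anon=$(meminfo_value AnonHugePages) || anon=0
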
00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.265 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.265 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # 
continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.266 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.266 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.267 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:42.267 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:42.267 14:56:12 -- setup/hugepages.sh@97 -- # anon=0 00:05:42.267 14:56:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:42.267 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.267 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:42.267 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:42.267 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.267 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.267 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.267 14:56:12 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.267 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.267 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6894016 kB' 'MemAvailable: 9395524 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456780 kB' 'Inactive: 2370544 kB' 'Active(anon): 128952 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120096 kB' 'Mapped: 51052 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6848 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 
14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.267 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.267 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.268 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.268 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.268 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:42.268 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:42.268 14:56:12 -- setup/hugepages.sh@99 -- # surp=0 00:05:42.268 14:56:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.268 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.268 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:42.268 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:42.268 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.268 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.268 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.268 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.268 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.269 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6894016 kB' 'MemAvailable: 9395524 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456644 kB' 'Inactive: 2370544 kB' 'Active(anon): 128816 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6832 kB' 
'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.269 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.269 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.270 14:56:12 -- setup/common.sh@33 -- # echo 0 00:05:42.270 14:56:12 -- setup/common.sh@33 -- # return 0 00:05:42.270 nr_hugepages=1025 00:05:42.270 resv_hugepages=0 00:05:42.270 surplus_hugepages=0 00:05:42.270 anon_hugepages=0 00:05:42.270 14:56:12 -- setup/hugepages.sh@100 -- # resv=0 00:05:42.270 14:56:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:42.270 14:56:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:42.270 14:56:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:42.270 14:56:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:42.270 14:56:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:42.270 14:56:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:42.270 14:56:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:42.270 14:56:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:42.270 14:56:12 -- setup/common.sh@18 -- # local node= 00:05:42.270 14:56:12 -- setup/common.sh@19 -- # local var val 00:05:42.270 14:56:12 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.270 14:56:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.270 14:56:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.270 14:56:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.270 14:56:12 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.270 14:56:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.270 14:56:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6893768 kB' 'MemAvailable: 9395276 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456628 kB' 'Inactive: 2370544 kB' 'Active(anon): 128800 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119908 kB' 'Mapped: 50924 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6832 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458560 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55544 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.270 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 
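
At this point the test has read the anon, surplus and reserved counts (all 0), echoed its expectations (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and is re-reading HugePages_Total; the scan continuing below returns 1025, which hugepages.sh then requires to equal nr_hugepages + surp + resv. A standalone sketch of that accounting check, assuming the 1025-page odd_alloc request shown in the trace (field() is a local helper for the sketch, not part of the test scripts):

    # Sketch of the accounting check, not the hugepages.sh implementation.
    expected=1025   # pages requested for this odd_alloc pass

    field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    total=$(field HugePages_Total)
    surp=$(field HugePages_Surp)
    resv=$(field HugePages_Rsvd)

    # Everything the kernel reports must be the requested pool plus any
    # surplus or reserved pages; in this run both extras are expected to be 0.
    if (( total == expected + surp + resv )); then
        echo "nr_hugepages=$expected verified (surp=$surp resv=$resv)"
    else
        echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
        exit 1
    fi
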
00:05:42.270 14:56:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.270 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 
00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # [[ SReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.271 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.271 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:12 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:12 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:42.272 14:56:13 -- setup/common.sh@33 -- # echo 1025 00:05:42.272 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:42.272 14:56:13 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:42.272 14:56:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:42.272 14:56:13 -- setup/hugepages.sh@27 -- # local node 00:05:42.272 14:56:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:42.272 14:56:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 
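The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries above is the get_meminfo helper from setup/common.sh scanning meminfo one field at a time: it reads /proc/meminfo, or the per-node file under /sys/devices/system/node when a node argument is given, strips the leading "Node N " prefix, and walks key/value pairs with IFS=': ' until the requested field (HugePages_Total here) matches, then echoes its value (1025). A minimal standalone sketch of that pattern, assuming the structure the trace shows — the function name and exact details here are approximations, not the script's verbatim source:

    shopt -s extglob                      # needed for the +([0-9]) patterns used below

    get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # with a node argument, read the per-node meminfo instead (as the trace does for node 0)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip fields until the requested key matches
            echo "$val"
            return 0
        done
        return 1
    }

Something like "get_meminfo_sketch HugePages_Total" or "get_meminfo_sketch HugePages_Surp 0" reproduces the two lookups traced in this test.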
00:05:42.272 14:56:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:42.272 14:56:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:42.272 14:56:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:42.272 14:56:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:42.272 14:56:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:42.272 14:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.272 14:56:13 -- setup/common.sh@18 -- # local node=0 00:05:42.272 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:42.272 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.272 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.272 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:42.272 14:56:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:42.272 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.272 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.272 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6893768 kB' 'MemUsed: 5345348 kB' 'SwapCached: 0 kB' 'Active: 456616 kB' 'Inactive: 2370544 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708860 kB' 'Mapped: 50924 kB' 'AnonPages: 119892 kB' 'Shmem: 10488 kB' 'KernelStack: 6832 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.272 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.272 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 
14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 
14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.273 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.273 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.274 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.274 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.274 14:56:13 -- setup/common.sh@33 -- # echo 0 00:05:42.274 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:42.274 node0=1025 expecting 1025 00:05:42.274 14:56:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:42.274 14:56:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:42.274 14:56:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:42.274 14:56:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:42.274 14:56:13 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:42.274 14:56:13 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:42.274 00:05:42.274 real 0m0.589s 00:05:42.274 user 0m0.270s 00:05:42.274 sys 0m0.316s 00:05:42.274 14:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:42.274 14:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:42.274 ************************************ 00:05:42.274 END TEST odd_alloc 00:05:42.274 ************************************ 00:05:42.533 14:56:13 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:42.533 14:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.533 14:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.533 14:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:42.533 ************************************ 00:05:42.533 START TEST custom_alloc 00:05:42.533 ************************************ 00:05:42.533 14:56:13 -- common/autotest_common.sh@1114 -- # custom_alloc 00:05:42.533 14:56:13 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:42.533 14:56:13 -- setup/hugepages.sh@169 -- # local node 00:05:42.533 14:56:13 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:42.533 14:56:13 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:42.533 14:56:13 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:42.533 14:56:13 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:42.533 14:56:13 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:42.533 14:56:13 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:42.533 14:56:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.533 14:56:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.533 14:56:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.533 14:56:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.533 14:56:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.533 14:56:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@83 -- # : 0 00:05:42.533 14:56:13 -- setup/hugepages.sh@84 -- # : 0 00:05:42.533 14:56:13 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:42.533 14:56:13 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:42.533 14:56:13 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:42.533 14:56:13 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:42.533 14:56:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:42.533 14:56:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:42.533 14:56:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:42.533 14:56:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:42.533 14:56:13 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:42.533 14:56:13 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:42.533 14:56:13 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:42.533 14:56:13 -- setup/hugepages.sh@78 -- # return 0 00:05:42.533 14:56:13 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:42.533 14:56:13 -- setup/hugepages.sh@187 -- # setup output 00:05:42.533 14:56:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:42.533 14:56:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.794 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.794 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:42.794 14:56:13 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:42.794 14:56:13 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:42.794 14:56:13 -- setup/hugepages.sh@89 -- # local node 00:05:42.794 14:56:13 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:42.794 14:56:13 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:42.794 14:56:13 -- setup/hugepages.sh@92 -- # local surp 
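The nr_hugepages=512 chosen just above follows directly from the size custom_alloc requests: get_test_nr_hugepages is handed 1048576 kB (1 GiB) and the runner's default huge page size is 2048 kB, as the Hugepagesize field in the meminfo dumps shows, so 1048576 / 2048 = 512 pages, all placed on node 0 via HUGENODE='nodes_hp[0]=512' because this VM exposes a single NUMA node (no_nodes=1). A quick way to recompute that figure on the same box (variable names here are mine, not the script's):

    size_kb=1048576                                                      # size passed to get_test_nr_hugepages
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 kB on this runner
    echo $(( size_kb / hugepagesize_kb ))                                # -> 512 huge pages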
00:05:42.794 14:56:13 -- setup/hugepages.sh@93 -- # local resv 00:05:42.794 14:56:13 -- setup/hugepages.sh@94 -- # local anon 00:05:42.794 14:56:13 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:42.794 14:56:13 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:42.794 14:56:13 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:42.794 14:56:13 -- setup/common.sh@18 -- # local node= 00:05:42.794 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:42.794 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.794 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.794 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.794 14:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.794 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.794 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.794 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7945772 kB' 'MemAvailable: 10447280 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 457468 kB' 'Inactive: 2370544 kB' 'Active(anon): 129640 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120736 kB' 'Mapped: 50932 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179960 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99792 kB' 'KernelStack: 6928 kB' 'PageTables: 4780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.795 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.795 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:42.796 14:56:13 -- setup/common.sh@33 -- # echo 0 00:05:42.796 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:42.796 14:56:13 -- setup/hugepages.sh@97 -- # anon=0 00:05:42.796 14:56:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:42.796 14:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:42.796 14:56:13 -- setup/common.sh@18 -- # local node= 00:05:42.796 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:42.796 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.796 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
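What verify_nr_hugepages is assembling in this stretch: AnonHugePages is read only because the transparent-hugepage setting tested above is not "[never]" (anon comes back 0), and the HugePages_Surp lookup starting here, together with the HugePages_Rsvd lookup that follows, feeds the same consistency check seen earlier at hugepages.sh@110 — the kernel's HugePages_Total must equal the requested count plus surplus plus reserved pages. A simplified recomputation of that check, using awk in place of the script's get_meminfo helper (names are mine; the real script also repeats the comparison per NUMA node):

    nr_hugepages=512    # what this custom_alloc pass configured
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo OK || echo "mismatch: total=$total surp=$surp resv=$resv"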
00:05:42.796 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.796 14:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.796 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.796 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7945268 kB' 'MemAvailable: 10446776 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456688 kB' 'Inactive: 2370544 kB' 'Active(anon): 128860 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119948 kB' 'Mapped: 50816 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179976 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99808 kB' 'KernelStack: 6856 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- 
setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.796 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.796 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:42.797 14:56:13 -- setup/common.sh@33 -- # echo 0 00:05:42.797 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:42.797 14:56:13 -- setup/hugepages.sh@99 -- # surp=0 00:05:42.797 14:56:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:42.797 14:56:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:42.797 14:56:13 -- setup/common.sh@18 -- # local node= 00:05:42.797 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:42.797 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:42.797 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:42.797 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:42.797 14:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:42.797 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:42.797 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7945268 kB' 'MemAvailable: 10446776 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456696 kB' 'Inactive: 2370544 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119952 kB' 'Mapped: 50816 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179972 kB' 
'SReclaimable: 80168 kB' 'SUnreclaim: 99804 kB' 'KernelStack: 6856 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.797 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.797 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.797 14:56:13 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # continue 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:42.798 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:42.798 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 
00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.059 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.059 14:56:13 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.060 14:56:13 -- setup/common.sh@33 -- # echo 0 00:05:43.060 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:43.060 14:56:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:43.060 nr_hugepages=512 00:05:43.060 resv_hugepages=0 00:05:43.060 surplus_hugepages=0 00:05:43.060 anon_hugepages=0 00:05:43.060 14:56:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:43.060 14:56:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.060 14:56:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.060 14:56:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.060 14:56:13 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:43.060 14:56:13 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:43.060 14:56:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:43.060 14:56:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.060 14:56:13 -- setup/common.sh@18 -- # local node= 00:05:43.060 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:43.060 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.060 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.060 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.060 14:56:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.060 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.060 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7945268 kB' 'MemAvailable: 10446776 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456728 kB' 'Inactive: 2370544 kB' 'Active(anon): 128900 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119980 kB' 'Mapped: 50816 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179940 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99772 kB' 'KernelStack: 6856 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13983872 kB' 'Committed_AS: 322048 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 
'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.060 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.060 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 
-- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 
14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.061 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.061 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.062 14:56:13 -- setup/common.sh@33 -- # echo 512 00:05:43.062 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:43.062 14:56:13 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:43.062 14:56:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.062 14:56:13 -- setup/hugepages.sh@27 -- # local node 00:05:43.062 14:56:13 -- setup/hugepages.sh@29 -- # 
for node in /sys/devices/system/node/node+([0-9]) 00:05:43.062 14:56:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:43.062 14:56:13 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.062 14:56:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.062 14:56:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.062 14:56:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.062 14:56:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.062 14:56:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.062 14:56:13 -- setup/common.sh@18 -- # local node=0 00:05:43.062 14:56:13 -- setup/common.sh@19 -- # local var val 00:05:43.062 14:56:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.062 14:56:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.062 14:56:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.062 14:56:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.062 14:56:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.062 14:56:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 7945268 kB' 'MemUsed: 4293848 kB' 'SwapCached: 0 kB' 'Active: 456696 kB' 'Inactive: 2370544 kB' 'Active(anon): 128868 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708860 kB' 'Mapped: 50816 kB' 'AnonPages: 119948 kB' 'Shmem: 10488 kB' 'KernelStack: 6856 kB' 'PageTables: 4680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80168 kB' 'Slab: 179932 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.062 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.062 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 
14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # continue 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.063 14:56:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.063 14:56:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.063 14:56:13 -- setup/common.sh@33 -- # echo 0 00:05:43.063 14:56:13 -- setup/common.sh@33 -- # return 0 00:05:43.063 14:56:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.063 14:56:13 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.063 14:56:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.063 14:56:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.063 node0=512 expecting 512 00:05:43.063 14:56:13 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:43.063 14:56:13 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:43.063 00:05:43.063 real 0m0.598s 00:05:43.063 user 0m0.260s 00:05:43.063 sys 0m0.326s 00:05:43.063 14:56:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:43.063 ************************************ 00:05:43.063 END TEST custom_alloc 00:05:43.063 14:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.063 ************************************ 00:05:43.063 14:56:13 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:43.063 14:56:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.063 14:56:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.063 14:56:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.063 ************************************ 00:05:43.063 START TEST no_shrink_alloc 00:05:43.063 ************************************ 00:05:43.063 14:56:13 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:05:43.063 14:56:13 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:43.063 14:56:13 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:43.063 14:56:13 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:43.063 14:56:13 -- 
setup/hugepages.sh@51 -- # shift 00:05:43.063 14:56:13 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:43.063 14:56:13 -- setup/hugepages.sh@52 -- # local node_ids 00:05:43.063 14:56:13 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:43.063 14:56:13 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:43.064 14:56:13 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:43.064 14:56:13 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:43.064 14:56:13 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:43.064 14:56:13 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:43.064 14:56:13 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:43.064 14:56:13 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:43.064 14:56:13 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:43.064 14:56:13 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:43.064 14:56:13 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:43.064 14:56:13 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:43.064 14:56:13 -- setup/hugepages.sh@73 -- # return 0 00:05:43.064 14:56:13 -- setup/hugepages.sh@198 -- # setup output 00:05:43.064 14:56:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.064 14:56:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:43.323 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.323 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:43.323 14:56:14 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:43.323 14:56:14 -- setup/hugepages.sh@89 -- # local node 00:05:43.323 14:56:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:43.323 14:56:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:43.323 14:56:14 -- setup/hugepages.sh@92 -- # local surp 00:05:43.323 14:56:14 -- setup/hugepages.sh@93 -- # local resv 00:05:43.323 14:56:14 -- setup/hugepages.sh@94 -- # local anon 00:05:43.323 14:56:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:43.323 14:56:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:43.323 14:56:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:43.323 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:43.323 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:43.323 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.586 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.586 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.586 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.586 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.586 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.586 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6966080 kB' 'MemAvailable: 9467588 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 457132 kB' 'Inactive: 2370544 kB' 'Active(anon): 129304 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120660 kB' 
'Mapped: 51028 kB' 'Shmem: 10488 kB' 'KReclaimable: 80168 kB' 'Slab: 179904 kB' 'SReclaimable: 80168 kB' 'SUnreclaim: 99736 kB' 'KernelStack: 6920 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.586 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.586 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.587 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.587 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
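
The verify pass running here re-reads the same counters; for reference, the page counts it expects follow directly from the requested size divided by the 2048 kB hugepage size reported in the dumps. A sketch of that arithmetic, with names approximated from the hugepages.sh trace (get_test_nr_hugepages 2097152 0 earlier in this test):

    # Sizing math behind the expected counts (reconstruction; hugepages.sh's
    # get_test_nr_hugepages is the real source of truth).
    default_hugepages=2048                      # kB, Hugepagesize in the dumps
    size=2097152                                # kB requested by no_shrink_alloc
    nr_hugepages=$((size / default_hugepages))  # 2097152 / 2048 = 1024 pages
    echo "nr_hugepages=$nr_hugepages"           # matches HugePages_Total: 1024 above

The earlier custom_alloc pass ran the same kind of check with 512 pages (Hugetlb: 1048576 kB in its dumps), and a pass requires HugePages_Total to equal nr_hugepages plus surplus and reserved pages, which is the (( ... == nr_hugepages + surp + resv )) test seen in the trace.
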
00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:43.588 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:43.588 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:43.588 14:56:14 -- setup/hugepages.sh@97 -- # anon=0 00:05:43.588 14:56:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:43.588 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.588 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:43.588 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:43.588 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.588 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.588 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.588 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.588 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.588 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6966728 kB' 'MemAvailable: 9468236 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456888 kB' 'Inactive: 2370544 kB' 'Active(anon): 129060 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120456 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 80164 kB' 'Slab: 179924 kB' 'SReclaimable: 80164 kB' 'SUnreclaim: 99760 kB' 'KernelStack: 6960 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.588 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.588 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 
00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.589 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.589 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.589 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:43.589 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:43.589 14:56:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:43.589 14:56:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:43.589 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:43.589 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:43.589 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:43.589 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.589 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.590 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.590 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.590 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.590 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6966728 kB' 'MemAvailable: 9468236 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456884 kB' 'Inactive: 2370544 kB' 'Active(anon): 129056 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120456 kB' 'Mapped: 50936 kB' 'Shmem: 10488 kB' 'KReclaimable: 80164 kB' 'Slab: 179912 kB' 'SReclaimable: 80164 kB' 'SUnreclaim: 99748 kB' 'KernelStack: 6960 kB' 'PageTables: 4872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 
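The xtrace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" line at a time: AnonHugePages and HugePages_Surp both come back as 0, which setup/hugepages.sh records as anon=0 and surp=0 before repeating the same lookup for HugePages_Rsvd. A minimal sketch of that kind of lookup is below, assuming only a stock /proc/meminfo; the function name is illustrative and this is not the repository's actual helper.

#!/usr/bin/env bash
# Sketch: fetch one "key: value" field from /proc/meminfo the way the traced
# get_meminfo loop does -- split each line on ': ', match the key, echo the value.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # numeric value; the trailing "kB" column lands in _
            return 0
        fi
    done < /proc/meminfo
    return 1
}

surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the run logged here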
00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 
-- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.590 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.590 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:43.591 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:43.591 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:43.591 nr_hugepages=1024 00:05:43.591 resv_hugepages=0 00:05:43.591 14:56:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:43.591 14:56:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:43.591 14:56:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:43.591 surplus_hugepages=0 00:05:43.591 anon_hugepages=0 00:05:43.591 14:56:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:43.591 14:56:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:43.591 14:56:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.591 14:56:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:43.591 14:56:14 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:05:43.591 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:43.591 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:43.591 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:43.591 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.591 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.591 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:43.591 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:43.591 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.591 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6966728 kB' 'MemAvailable: 9468236 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 456700 kB' 'Inactive: 2370544 kB' 'Active(anon): 128872 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120004 kB' 'Mapped: 51196 kB' 'Shmem: 10488 kB' 'KReclaimable: 80164 kB' 'Slab: 179912 kB' 'SReclaimable: 80164 kB' 'SUnreclaim: 99748 kB' 'KernelStack: 6824 kB' 'PageTables: 4612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 322380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 
-- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.591 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.591 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.592 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.592 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:43.593 14:56:14 -- setup/common.sh@33 -- # echo 1024 00:05:43.593 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:43.593 14:56:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:43.593 14:56:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:43.593 14:56:14 -- setup/hugepages.sh@27 -- # local node 00:05:43.593 14:56:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:43.593 14:56:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:43.593 14:56:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:43.593 14:56:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:43.593 14:56:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:43.593 14:56:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:43.593 14:56:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:43.593 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:43.593 14:56:14 -- setup/common.sh@18 -- # local node=0 00:05:43.593 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:43.593 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:43.593 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:43.593 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:43.593 14:56:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:43.593 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:43.593 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6966980 kB' 'MemUsed: 5272136 kB' 'SwapCached: 0 kB' 'Active: 456988 kB' 'Inactive: 2370544 kB' 'Active(anon): 129160 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2708860 kB' 'Mapped: 51308 kB' 'AnonPages: 119804 kB' 'Shmem: 10488 kB' 'KernelStack: 6840 kB' 'PageTables: 4644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80164 kB' 'Slab: 179892 kB' 'SReclaimable: 80164 kB' 'SUnreclaim: 99728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- 
setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.593 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.593 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # continue 00:05:43.594 14:56:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:43.594 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:43.594 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:43.594 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:43.594 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:43.594 14:56:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:43.594 14:56:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:43.594 14:56:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:43.594 14:56:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:43.594 14:56:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:43.594 node0=1024 expecting 1024 00:05:43.594 14:56:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:43.594 14:56:14 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:43.594 14:56:14 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:43.594 14:56:14 -- setup/hugepages.sh@202 -- # setup output 00:05:43.594 14:56:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.594 14:56:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:43.853 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.116 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.116 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:44.116 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:44.116 14:56:14 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:44.116 14:56:14 -- setup/hugepages.sh@89 -- # local node 00:05:44.116 14:56:14 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:44.116 14:56:14 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:44.116 14:56:14 -- setup/hugepages.sh@92 -- # local surp 00:05:44.116 14:56:14 -- setup/hugepages.sh@93 -- # local resv 00:05:44.116 14:56:14 -- setup/hugepages.sh@94 -- # local anon 00:05:44.116 14:56:14 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:44.116 14:56:14 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:44.116 14:56:14 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:44.116 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:44.116 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:44.116 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.116 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.116 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.116 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.116 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.116 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.116 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968308 kB' 'MemAvailable: 9469800 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 454204 kB' 'Inactive: 2370544 kB' 'Active(anon): 126376 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117500 kB' 'Mapped: 50236 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 
179692 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99556 kB' 'KernelStack: 6712 kB' 'PageTables: 3984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 306084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55480 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.116 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.116 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 
-- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.117 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:44.117 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:44.117 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:44.117 14:56:14 -- setup/hugepages.sh@97 -- # anon=0 00:05:44.117 14:56:14 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:44.117 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.117 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:44.117 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:44.117 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.117 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.117 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.117 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.117 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.117 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.117 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968560 kB' 'MemAvailable: 9470052 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 453960 kB' 'Inactive: 2370544 kB' 'Active(anon): 126132 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117312 kB' 'Mapped: 50236 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179656 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99520 kB' 'KernelStack: 6692 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 
'DirectMap1G: 8388608 kB' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 
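The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries in this trace are setup/common.sh's get_meminfo scanning /proc/meminfo one line at a time until the requested key matches. A minimal standalone sketch of that lookup, with an illustrative function name and simplified structure rather than the SPDK script itself:

    #!/usr/bin/env bash
    # Scan /proc/meminfo line by line and print the value of one key,
    # mirroring the per-key comparisons traced above.
    get_meminfo_value() {
        local want=$1 key val _
        while IFS=': ' read -r key val _; do
            [[ $key == "$want" ]] || continue   # non-matching keys are skipped, as in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Surp   # prints 0 in this run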
00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.118 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.118 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 
14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.119 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:44.119 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:44.119 14:56:14 -- setup/hugepages.sh@99 -- # surp=0 00:05:44.119 14:56:14 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:44.119 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:44.119 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:44.119 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:44.119 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.119 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.119 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.119 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.119 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.119 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968560 kB' 'MemAvailable: 9470052 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 453860 kB' 'Inactive: 2370544 kB' 'Active(anon): 126032 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117212 kB' 'Mapped: 50236 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179652 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99516 kB' 'KernelStack: 6660 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55432 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 
-- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.119 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.119 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 
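The counters being read here (HugePages_Total/Free/Rsvd/Surp) describe the persistent hugepage pool that setup.sh sized earlier; the "Requested 512 hugepages but 1024 already allocated on node0" message, together with NRHUGE=512 and CLEAR_HUGE=no, means the existing larger pool was simply kept. A sketch of the standard kernel interfaces involved, using example commands that are not taken from this log:

    # Snapshot of the 2 MiB hugepage pool the test is verifying:
    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo

    # System-wide request for 512 hugepages (root required); with CLEAR_HUGE=no
    # an already larger pool, 1024 pages here, is left untouched.
    echo 512 | sudo tee /proc/sys/vm/nr_hugepages

    # Per-node view of the same pool on node 0:
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages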
00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.120 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:44.120 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:44.120 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:44.120 14:56:14 -- setup/hugepages.sh@100 -- # resv=0 00:05:44.120 nr_hugepages=1024 00:05:44.120 14:56:14 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:44.120 resv_hugepages=0 00:05:44.120 14:56:14 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:44.120 surplus_hugepages=0 00:05:44.120 14:56:14 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:44.120 anon_hugepages=0 00:05:44.120 14:56:14 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:44.120 14:56:14 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.120 14:56:14 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:44.120 14:56:14 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:44.120 14:56:14 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:05:44.120 14:56:14 -- setup/common.sh@18 -- # local node= 00:05:44.120 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:44.120 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.120 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.120 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:44.120 14:56:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:44.120 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.120 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.120 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6968560 kB' 'MemAvailable: 9470052 kB' 'Buffers: 3200 kB' 'Cached: 2705660 kB' 'SwapCached: 0 kB' 'Active: 453628 kB' 'Inactive: 2370544 kB' 'Active(anon): 125800 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 116980 kB' 'Mapped: 50236 kB' 'Shmem: 10488 kB' 'KReclaimable: 80136 kB' 'Slab: 179652 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99516 kB' 'KernelStack: 6660 kB' 'PageTables: 4024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459584 kB' 'Committed_AS: 304012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55448 kB' 'VmallocChunk: 0 kB' 'Percpu: 6384 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 6103040 kB' 'DirectMap1G: 8388608 kB' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- 
setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 
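With anon=0, surp=0 and resv=0 established above, verify_nr_hugepages only has to confirm that the pool size the kernel reports matches the expected count plus surplus and reserved pages; in this run that is 1024 == 1024 + 0 + 0. Restated in isolation, with illustrative variable names:

    nr_hugepages=1024   # count the test expects
    surp=0              # HugePages_Surp read above
    resv=0              # HugePages_Rsvd read above
    total=1024          # HugePages_Total being read here

    # Same shape as the (( ... )) checks in setup/hugepages.sh:
    (( total == nr_hugepages + surp + resv )) && echo 'pool size consistent'
    (( total == nr_hugepages )) && echo 'no surplus or reserved pages'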
00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 
14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.121 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.121 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:44.122 14:56:14 -- setup/common.sh@33 -- # echo 1024 00:05:44.122 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:44.122 14:56:14 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:44.122 14:56:14 -- setup/hugepages.sh@112 -- # get_nodes 00:05:44.122 14:56:14 -- setup/hugepages.sh@27 -- # local node 00:05:44.122 14:56:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:44.122 14:56:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:44.122 14:56:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:44.122 14:56:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:44.122 14:56:14 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:44.122 14:56:14 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:44.122 14:56:14 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:44.122 14:56:14 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:44.122 14:56:14 -- setup/common.sh@18 -- # local node=0 00:05:44.122 14:56:14 -- setup/common.sh@19 -- # local var val 00:05:44.122 14:56:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:44.122 14:56:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:44.122 14:56:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:44.122 14:56:14 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:44.122 14:56:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:44.122 14:56:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12239116 kB' 'MemFree: 6971832 kB' 'MemUsed: 5267284 kB' 'SwapCached: 0 kB' 'Active: 453840 kB' 'Inactive: 2370544 kB' 'Active(anon): 126012 kB' 'Inactive(anon): 0 kB' 'Active(file): 327828 kB' 'Inactive(file): 2370544 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 
'Writeback: 0 kB' 'FilePages: 2708860 kB' 'Mapped: 50076 kB' 'AnonPages: 117160 kB' 'Shmem: 10488 kB' 'KernelStack: 6720 kB' 'PageTables: 3932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 80136 kB' 'Slab: 179620 kB' 'SReclaimable: 80136 kB' 'SUnreclaim: 99484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 
14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.122 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.122 14:56:14 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- 
# continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@32 -- # continue 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:44.123 14:56:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:44.123 14:56:14 
-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:44.123 14:56:14 -- setup/common.sh@33 -- # echo 0 00:05:44.123 14:56:14 -- setup/common.sh@33 -- # return 0 00:05:44.123 14:56:14 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:44.123 14:56:14 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:44.123 14:56:14 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:44.123 14:56:14 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:44.123 node0=1024 expecting 1024 00:05:44.123 14:56:14 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:44.123 14:56:14 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:44.123 00:05:44.123 real 0m1.132s 00:05:44.123 user 0m0.574s 00:05:44.123 sys 0m0.593s 00:05:44.123 14:56:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.123 14:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.123 ************************************ 00:05:44.123 END TEST no_shrink_alloc 00:05:44.123 ************************************ 00:05:44.383 14:56:14 -- setup/hugepages.sh@217 -- # clear_hp 00:05:44.383 14:56:14 -- setup/hugepages.sh@37 -- # local node hp 00:05:44.383 14:56:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:44.383 14:56:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.383 14:56:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:44.383 14:56:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:44.383 14:56:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:44.383 14:56:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:44.383 14:56:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:44.383 00:05:44.383 real 0m5.062s 00:05:44.383 user 0m2.398s 00:05:44.383 sys 0m2.564s 00:05:44.383 14:56:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:44.383 14:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.383 ************************************ 00:05:44.383 END TEST hugepages 00:05:44.383 ************************************ 00:05:44.383 14:56:14 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:44.383 14:56:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.383 14:56:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.383 14:56:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.383 ************************************ 00:05:44.383 START TEST driver 00:05:44.383 ************************************ 00:05:44.383 14:56:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:44.383 * Looking for test storage... 
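The long HugePages_Total / HugePages_Surp scan above is setup/common.sh walking a meminfo file key by key until the requested field matches, first against /proc/meminfo and then against the per-node copy under /sys/devices/system/node/node0/meminfo. A minimal bash sketch of that lookup (a simplified re-derivation of the behaviour shown in the trace, not the literal setup/common.sh source, which buffers the file with mapfile first):

    get_meminfo() {                      # get_meminfo <key> [node] -> value column only
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # Per-node lookups switch to the sysfs copy when it exists, as the trace shows.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}            # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue      # the skip-until-match loop traced above
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    hp_total=$(get_meminfo HugePages_Total)       # 1024 in this run
    hp_surp0=$(get_meminfo HugePages_Surp 0)      # 0 in this run
    echo "node0=$hp_total expecting 1024"

The no_shrink_alloc check passes because the single node accounts for the whole 1024-page pool with no surplus pages, which is exactly the 'node0=1024 expecting 1024' line printed above.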
00:05:44.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:44.383 14:56:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:44.383 14:56:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:44.383 14:56:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:44.383 14:56:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:44.383 14:56:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:44.383 14:56:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:44.383 14:56:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:44.383 14:56:15 -- scripts/common.sh@335 -- # IFS=.-: 00:05:44.383 14:56:15 -- scripts/common.sh@335 -- # read -ra ver1 00:05:44.383 14:56:15 -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.383 14:56:15 -- scripts/common.sh@336 -- # read -ra ver2 00:05:44.383 14:56:15 -- scripts/common.sh@337 -- # local 'op=<' 00:05:44.383 14:56:15 -- scripts/common.sh@339 -- # ver1_l=2 00:05:44.383 14:56:15 -- scripts/common.sh@340 -- # ver2_l=1 00:05:44.383 14:56:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:44.383 14:56:15 -- scripts/common.sh@343 -- # case "$op" in 00:05:44.383 14:56:15 -- scripts/common.sh@344 -- # : 1 00:05:44.383 14:56:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:44.383 14:56:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.383 14:56:15 -- scripts/common.sh@364 -- # decimal 1 00:05:44.383 14:56:15 -- scripts/common.sh@352 -- # local d=1 00:05:44.383 14:56:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.383 14:56:15 -- scripts/common.sh@354 -- # echo 1 00:05:44.383 14:56:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:44.383 14:56:15 -- scripts/common.sh@365 -- # decimal 2 00:05:44.383 14:56:15 -- scripts/common.sh@352 -- # local d=2 00:05:44.383 14:56:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.383 14:56:15 -- scripts/common.sh@354 -- # echo 2 00:05:44.383 14:56:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:44.383 14:56:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:44.383 14:56:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:44.383 14:56:15 -- scripts/common.sh@367 -- # return 0 00:05:44.383 14:56:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.383 14:56:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:44.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.383 --rc genhtml_branch_coverage=1 00:05:44.383 --rc genhtml_function_coverage=1 00:05:44.383 --rc genhtml_legend=1 00:05:44.383 --rc geninfo_all_blocks=1 00:05:44.383 --rc geninfo_unexecuted_blocks=1 00:05:44.383 00:05:44.383 ' 00:05:44.383 14:56:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:44.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.383 --rc genhtml_branch_coverage=1 00:05:44.383 --rc genhtml_function_coverage=1 00:05:44.383 --rc genhtml_legend=1 00:05:44.383 --rc geninfo_all_blocks=1 00:05:44.383 --rc geninfo_unexecuted_blocks=1 00:05:44.383 00:05:44.383 ' 00:05:44.383 14:56:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:44.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.383 --rc genhtml_branch_coverage=1 00:05:44.383 --rc genhtml_function_coverage=1 00:05:44.383 --rc genhtml_legend=1 00:05:44.383 --rc geninfo_all_blocks=1 00:05:44.383 --rc geninfo_unexecuted_blocks=1 00:05:44.383 00:05:44.383 ' 00:05:44.383 14:56:15 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:44.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.383 --rc genhtml_branch_coverage=1 00:05:44.383 --rc genhtml_function_coverage=1 00:05:44.383 --rc genhtml_legend=1 00:05:44.383 --rc geninfo_all_blocks=1 00:05:44.383 --rc geninfo_unexecuted_blocks=1 00:05:44.383 00:05:44.383 ' 00:05:44.383 14:56:15 -- setup/driver.sh@68 -- # setup reset 00:05:44.383 14:56:15 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:44.383 14:56:15 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:45.051 14:56:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:45.051 14:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.051 14:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.051 14:56:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.051 ************************************ 00:05:45.051 START TEST guess_driver 00:05:45.051 ************************************ 00:05:45.051 14:56:15 -- common/autotest_common.sh@1114 -- # guess_driver 00:05:45.051 14:56:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:45.051 14:56:15 -- setup/driver.sh@47 -- # local fail=0 00:05:45.051 14:56:15 -- setup/driver.sh@49 -- # pick_driver 00:05:45.051 14:56:15 -- setup/driver.sh@36 -- # vfio 00:05:45.051 14:56:15 -- setup/driver.sh@21 -- # local iommu_grups 00:05:45.051 14:56:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:45.051 14:56:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:45.051 14:56:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:45.051 14:56:15 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:45.051 14:56:15 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:45.051 14:56:15 -- setup/driver.sh@32 -- # return 1 00:05:45.051 14:56:15 -- setup/driver.sh@38 -- # uio 00:05:45.051 14:56:15 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:45.051 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:45.051 14:56:15 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:45.051 Looking for driver=uio_pci_generic 00:05:45.051 14:56:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:45.051 14:56:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.051 14:56:15 -- setup/driver.sh@45 -- # setup output config 00:05:45.051 14:56:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.051 14:56:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:45.621 14:56:16 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:45.621 14:56:16 -- setup/driver.sh@58 -- # continue 00:05:45.621 14:56:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.880 14:56:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:45.880 14:56:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == 
uio_pci_generic ]] 00:05:45.880 14:56:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.880 14:56:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:45.880 14:56:16 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:45.880 14:56:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:45.880 14:56:16 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:45.880 14:56:16 -- setup/driver.sh@65 -- # setup reset 00:05:45.880 14:56:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:45.880 14:56:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:46.448 00:05:46.448 real 0m1.407s 00:05:46.448 user 0m0.555s 00:05:46.448 sys 0m0.862s 00:05:46.448 14:56:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.448 14:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.448 ************************************ 00:05:46.448 END TEST guess_driver 00:05:46.448 ************************************ 00:05:46.448 00:05:46.448 real 0m2.181s 00:05:46.448 user 0m0.891s 00:05:46.448 sys 0m1.367s 00:05:46.448 14:56:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:46.448 14:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.448 ************************************ 00:05:46.448 END TEST driver 00:05:46.448 ************************************ 00:05:46.448 14:56:17 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:46.448 14:56:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.448 14:56:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.448 14:56:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.448 ************************************ 00:05:46.448 START TEST devices 00:05:46.448 ************************************ 00:05:46.448 14:56:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:46.707 * Looking for test storage... 00:05:46.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:46.707 14:56:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:46.707 14:56:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:46.707 14:56:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:46.707 14:56:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:46.707 14:56:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:46.707 14:56:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:46.707 14:56:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:46.707 14:56:17 -- scripts/common.sh@335 -- # IFS=.-: 00:05:46.707 14:56:17 -- scripts/common.sh@335 -- # read -ra ver1 00:05:46.707 14:56:17 -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.707 14:56:17 -- scripts/common.sh@336 -- # read -ra ver2 00:05:46.707 14:56:17 -- scripts/common.sh@337 -- # local 'op=<' 00:05:46.707 14:56:17 -- scripts/common.sh@339 -- # ver1_l=2 00:05:46.707 14:56:17 -- scripts/common.sh@340 -- # ver2_l=1 00:05:46.707 14:56:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:46.707 14:56:17 -- scripts/common.sh@343 -- # case "$op" in 00:05:46.707 14:56:17 -- scripts/common.sh@344 -- # : 1 00:05:46.707 14:56:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:46.707 14:56:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.707 14:56:17 -- scripts/common.sh@364 -- # decimal 1 00:05:46.707 14:56:17 -- scripts/common.sh@352 -- # local d=1 00:05:46.707 14:56:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.707 14:56:17 -- scripts/common.sh@354 -- # echo 1 00:05:46.707 14:56:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:46.707 14:56:17 -- scripts/common.sh@365 -- # decimal 2 00:05:46.707 14:56:17 -- scripts/common.sh@352 -- # local d=2 00:05:46.707 14:56:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.707 14:56:17 -- scripts/common.sh@354 -- # echo 2 00:05:46.707 14:56:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:46.707 14:56:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:46.707 14:56:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:46.707 14:56:17 -- scripts/common.sh@367 -- # return 0 00:05:46.707 14:56:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.707 14:56:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:46.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.707 --rc genhtml_branch_coverage=1 00:05:46.707 --rc genhtml_function_coverage=1 00:05:46.707 --rc genhtml_legend=1 00:05:46.707 --rc geninfo_all_blocks=1 00:05:46.707 --rc geninfo_unexecuted_blocks=1 00:05:46.707 00:05:46.707 ' 00:05:46.707 14:56:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:46.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.707 --rc genhtml_branch_coverage=1 00:05:46.707 --rc genhtml_function_coverage=1 00:05:46.707 --rc genhtml_legend=1 00:05:46.707 --rc geninfo_all_blocks=1 00:05:46.707 --rc geninfo_unexecuted_blocks=1 00:05:46.707 00:05:46.707 ' 00:05:46.707 14:56:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:46.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.707 --rc genhtml_branch_coverage=1 00:05:46.707 --rc genhtml_function_coverage=1 00:05:46.707 --rc genhtml_legend=1 00:05:46.707 --rc geninfo_all_blocks=1 00:05:46.707 --rc geninfo_unexecuted_blocks=1 00:05:46.707 00:05:46.707 ' 00:05:46.707 14:56:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:46.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.707 --rc genhtml_branch_coverage=1 00:05:46.707 --rc genhtml_function_coverage=1 00:05:46.707 --rc genhtml_legend=1 00:05:46.707 --rc geninfo_all_blocks=1 00:05:46.707 --rc geninfo_unexecuted_blocks=1 00:05:46.707 00:05:46.707 ' 00:05:46.707 14:56:17 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:46.707 14:56:17 -- setup/devices.sh@192 -- # setup reset 00:05:46.707 14:56:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:46.707 14:56:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:47.644 14:56:18 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:47.644 14:56:18 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:47.644 14:56:18 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:47.644 14:56:18 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:47.644 14:56:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:47.644 14:56:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:47.644 14:56:18 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:47.644 14:56:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1660 
-- # [[ none != none ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:47.644 14:56:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:05:47.644 14:56:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:05:47.644 14:56:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:47.644 14:56:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:05:47.644 14:56:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:05:47.644 14:56:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:47.644 14:56:18 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:05:47.644 14:56:18 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:05:47.644 14:56:18 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:47.644 14:56:18 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:47.644 14:56:18 -- setup/devices.sh@196 -- # blocks=() 00:05:47.644 14:56:18 -- setup/devices.sh@196 -- # declare -a blocks 00:05:47.644 14:56:18 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:47.644 14:56:18 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:47.644 14:56:18 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:47.644 14:56:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:47.644 14:56:18 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:47.644 14:56:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:47.644 14:56:18 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:47.644 14:56:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:47.644 No valid GPT data, bailing 00:05:47.644 14:56:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:47.644 14:56:18 -- scripts/common.sh@393 -- # pt= 00:05:47.644 14:56:18 -- scripts/common.sh@394 -- # return 1 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:47.644 14:56:18 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:47.644 14:56:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:47.644 14:56:18 -- setup/common.sh@80 -- # echo 5368709120 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:47.644 14:56:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.644 14:56:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:47.644 14:56:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:47.644 14:56:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:47.644 14:56:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # block_in_use nvme1n1 
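Before any of the namespaces are sized, get_zoned_devs (the autotest_common.sh helper traced above) walks /sys/block/nvme* and checks each queue/zoned attribute; every device in this run reports none, so nothing is excluded. Roughly, and only as a sketch of the checks visible in the trace:

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        [[ -e $nvme/queue/zoned ]] || continue            # no attribute: treat as a regular disk
        [[ $(< "$nvme/queue/zoned") == none ]] && continue
        zoned_devs[$dev]=1                                # zoned namespace: skipped by the mount tests
    done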
00:05:47.644 14:56:18 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:47.644 14:56:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:47.644 No valid GPT data, bailing 00:05:47.644 14:56:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:47.644 14:56:18 -- scripts/common.sh@393 -- # pt= 00:05:47.644 14:56:18 -- scripts/common.sh@394 -- # return 1 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:47.644 14:56:18 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:47.644 14:56:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:47.644 14:56:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:47.644 14:56:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.644 14:56:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.644 14:56:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:47.644 14:56:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:47.644 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:47.644 14:56:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:47.644 14:56:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:47.645 14:56:18 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:47.645 14:56:18 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:47.645 14:56:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:47.645 No valid GPT data, bailing 00:05:47.645 14:56:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:47.645 14:56:18 -- scripts/common.sh@393 -- # pt= 00:05:47.645 14:56:18 -- scripts/common.sh@394 -- # return 1 00:05:47.645 14:56:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:47.645 14:56:18 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:47.645 14:56:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:47.645 14:56:18 -- setup/common.sh@80 -- # echo 4294967296 00:05:47.645 14:56:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.645 14:56:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.645 14:56:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:47.645 14:56:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:47.645 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:47.645 14:56:18 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:47.645 14:56:18 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:47.645 14:56:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:47.645 14:56:18 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:47.645 14:56:18 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:47.645 14:56:18 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:47.903 No valid GPT data, bailing 00:05:47.903 14:56:18 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:47.903 14:56:18 -- scripts/common.sh@393 -- # pt= 00:05:47.903 14:56:18 -- scripts/common.sh@394 -- # return 1 00:05:47.903 14:56:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:47.903 14:56:18 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:47.903 14:56:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:47.903 14:56:18 -- setup/common.sh@80 -- # echo 4294967296 
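Each namespace is then admitted to the test set only if it carries no partition table and is at least min_disk_size=3221225472 bytes (3 GiB, setup/devices.sh@198): spdk-gpt.py prints 'No valid GPT data, bailing', blkid reports no PTTYPE, so block_in_use returns 1, and the 5368709120 / 4294967296 byte sizes all clear the threshold. A sketch of that gate (the sector-to-byte conversion from /sys/block/<dev>/size is an assumption; the log only shows the resulting byte counts):

    min_disk_size=$((3 * 1024 * 1024 * 1024))       # 3221225472, as in the trace

    block_in_use() {                                 # true when a partition table already exists
        local block=$1 pt
        # scripts/common.sh also runs spdk-gpt.py first; the blkid probe is the part sketched here.
        pt=$(blkid -s PTTYPE -o value "/dev/$block")
        [[ -n $pt ]]                                 # empty PTTYPE is the "No valid GPT data" case
    }

    blocks=()
    for sysdev in /sys/block/nvme*; do
        block=${sysdev##*/}
        block_in_use "$block" && continue            # skip disks that are already partitioned
        size=$(( $(< "$sysdev/size") * 512 ))        # sectors -> bytes
        (( size >= min_disk_size )) && blocks+=("$block")
    done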
00:05:47.903 14:56:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:47.903 14:56:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:47.903 14:56:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:47.903 14:56:18 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:47.903 14:56:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:47.903 14:56:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:47.903 14:56:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:47.903 14:56:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:47.903 14:56:18 -- common/autotest_common.sh@10 -- # set +x 00:05:47.903 ************************************ 00:05:47.903 START TEST nvme_mount 00:05:47.903 ************************************ 00:05:47.903 14:56:18 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:47.903 14:56:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:47.903 14:56:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:47.903 14:56:18 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.903 14:56:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.903 14:56:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:47.903 14:56:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:47.903 14:56:18 -- setup/common.sh@40 -- # local part_no=1 00:05:47.903 14:56:18 -- setup/common.sh@41 -- # local size=1073741824 00:05:47.903 14:56:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:47.903 14:56:18 -- setup/common.sh@44 -- # parts=() 00:05:47.903 14:56:18 -- setup/common.sh@44 -- # local parts 00:05:47.903 14:56:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:47.903 14:56:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:47.903 14:56:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:47.903 14:56:18 -- setup/common.sh@46 -- # (( part++ )) 00:05:47.903 14:56:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:47.903 14:56:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:47.903 14:56:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:47.903 14:56:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:48.840 Creating new GPT entries in memory. 00:05:48.840 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:48.840 other utilities. 00:05:48.840 14:56:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:48.840 14:56:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:48.840 14:56:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:48.840 14:56:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:48.840 14:56:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:49.776 Creating new GPT entries in memory. 00:05:49.776 The operation has completed successfully. 
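The nvme_mount pass that starts here wipes the test disk, creates one partition at the sector bounds printed above, formats it, and mounts it under the test directory. Stripped of the flock and udev-sync wrappers the scripts use, the sequence is approximately this (a sketch; the dummy-file creation is illustrative):

    disk=/dev/nvme0n1
    mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

    sgdisk "$disk" --zap-all                  # destroy existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:264191        # partition 1, same bounds as the trace
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"                 # quiet + force, matching setup/common.sh@71
    mount "${disk}p1" "$mnt"
    : > "$mnt/test_nvme"                      # the file the later verify/rm steps operate on

In the real run each sgdisk call is serialized with flock on the disk and paired with sync_dev_uevents.sh, so the new partition node is guaranteed to exist before mkfs runs.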
00:05:49.776 14:56:20 -- setup/common.sh@57 -- # (( part++ )) 00:05:49.776 14:56:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.776 14:56:20 -- setup/common.sh@62 -- # wait 63903 00:05:49.776 14:56:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.776 14:56:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:49.776 14:56:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:49.776 14:56:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:49.776 14:56:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:50.035 14:56:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.035 14:56:20 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.035 14:56:20 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:50.035 14:56:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:50.035 14:56:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.035 14:56:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.035 14:56:20 -- setup/devices.sh@53 -- # local found=0 00:05:50.035 14:56:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.035 14:56:20 -- setup/devices.sh@56 -- # : 00:05:50.035 14:56:20 -- setup/devices.sh@59 -- # local pci status 00:05:50.035 14:56:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.035 14:56:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:50.035 14:56:20 -- setup/devices.sh@47 -- # setup output config 00:05:50.035 14:56:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.035 14:56:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:50.035 14:56:20 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:50.035 14:56:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:50.035 14:56:20 -- setup/devices.sh@63 -- # found=1 00:05:50.035 14:56:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.035 14:56:20 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:50.035 14:56:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.294 14:56:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:50.294 14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.552 14:56:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:50.552 14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.552 14:56:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:50.552 14:56:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:50.552 14:56:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.552 14:56:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.552 14:56:21 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.552 14:56:21 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:50.552 14:56:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.552 14:56:21 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.552 14:56:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:50.552 14:56:21 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:50.552 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:50.552 14:56:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:50.552 14:56:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:50.812 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.812 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:50.812 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:50.812 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:50.812 14:56:21 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:50.812 14:56:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:50.812 14:56:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.812 14:56:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:50.812 14:56:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:50.812 14:56:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.812 14:56:21 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.812 14:56:21 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:50.812 14:56:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:50.812 14:56:21 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:50.812 14:56:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:50.812 14:56:21 -- setup/devices.sh@53 -- # local found=0 00:05:50.812 14:56:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:50.812 14:56:21 -- setup/devices.sh@56 -- # : 00:05:50.812 14:56:21 -- setup/devices.sh@59 -- # local pci status 00:05:50.812 14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:50.812 14:56:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:50.812 14:56:21 -- setup/devices.sh@47 -- # setup output config 00:05:50.812 14:56:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:50.812 14:56:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:51.072 14:56:21 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.072 14:56:21 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:51.072 14:56:21 -- setup/devices.sh@63 -- # found=1 00:05:51.072 14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.072 14:56:21 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.072 
14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.331 14:56:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.331 14:56:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.331 14:56:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.331 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.590 14:56:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.590 14:56:22 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:51.590 14:56:22 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.590 14:56:22 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:51.590 14:56:22 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:51.590 14:56:22 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:51.590 14:56:22 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:51.590 14:56:22 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:51.590 14:56:22 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:51.590 14:56:22 -- setup/devices.sh@50 -- # local mount_point= 00:05:51.590 14:56:22 -- setup/devices.sh@51 -- # local test_file= 00:05:51.590 14:56:22 -- setup/devices.sh@53 -- # local found=0 00:05:51.590 14:56:22 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:51.590 14:56:22 -- setup/devices.sh@59 -- # local pci status 00:05:51.590 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.590 14:56:22 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:51.590 14:56:22 -- setup/devices.sh@47 -- # setup output config 00:05:51.590 14:56:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.590 14:56:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:51.849 14:56:22 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.849 14:56:22 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:51.849 14:56:22 -- setup/devices.sh@63 -- # found=1 00:05:51.849 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.849 14:56:22 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:51.849 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.108 14:56:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.108 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.108 14:56:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:52.108 14:56:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.108 14:56:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.108 14:56:22 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:52.108 14:56:22 -- setup/devices.sh@68 -- # return 0 00:05:52.108 14:56:22 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:52.108 14:56:22 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.108 14:56:22 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.108 14:56:22 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:52.108 14:56:22 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:52.108 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:52.108 00:05:52.108 real 0m4.412s 00:05:52.108 user 0m0.984s 00:05:52.108 sys 0m1.144s 00:05:52.108 14:56:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.108 14:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.108 ************************************ 00:05:52.108 END TEST nvme_mount 00:05:52.108 ************************************ 00:05:52.367 14:56:22 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:52.367 14:56:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.367 14:56:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.367 14:56:22 -- common/autotest_common.sh@10 -- # set +x 00:05:52.367 ************************************ 00:05:52.367 START TEST dm_mount 00:05:52.367 ************************************ 00:05:52.367 14:56:22 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:52.367 14:56:22 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:52.367 14:56:22 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:52.367 14:56:22 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:52.367 14:56:22 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:52.367 14:56:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:52.367 14:56:22 -- setup/common.sh@40 -- # local part_no=2 00:05:52.367 14:56:22 -- setup/common.sh@41 -- # local size=1073741824 00:05:52.367 14:56:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:52.367 14:56:22 -- setup/common.sh@44 -- # parts=() 00:05:52.367 14:56:22 -- setup/common.sh@44 -- # local parts 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.367 14:56:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.367 14:56:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:52.367 14:56:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:52.367 14:56:22 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:52.367 14:56:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:52.367 14:56:22 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:53.304 Creating new GPT entries in memory. 00:05:53.304 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:53.304 other utilities. 00:05:53.304 14:56:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:53.304 14:56:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:53.304 14:56:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:53.304 14:56:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:53.304 14:56:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:54.240 Creating new GPT entries in memory. 00:05:54.240 The operation has completed successfully. 00:05:54.240 14:56:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:54.240 14:56:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:54.240 14:56:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:54.240 14:56:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:54.240 14:56:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:55.619 The operation has completed successfully. 00:05:55.619 14:56:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:55.619 14:56:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:55.619 14:56:26 -- setup/common.sh@62 -- # wait 64358 00:05:55.619 14:56:26 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:55.619 14:56:26 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.619 14:56:26 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.619 14:56:26 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:55.619 14:56:26 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:55.619 14:56:26 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.619 14:56:26 -- setup/devices.sh@161 -- # break 00:05:55.619 14:56:26 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.619 14:56:26 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:55.619 14:56:26 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:55.619 14:56:26 -- setup/devices.sh@166 -- # dm=dm-0 00:05:55.619 14:56:26 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:55.619 14:56:26 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:55.619 14:56:26 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.619 14:56:26 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:55.619 14:56:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.619 14:56:26 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:55.619 14:56:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:55.619 14:56:26 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.619 14:56:26 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.619 14:56:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:55.620 14:56:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:55.620 14:56:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:55.620 14:56:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:55.620 14:56:26 -- setup/devices.sh@53 -- # local found=0 00:05:55.620 14:56:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:55.620 14:56:26 -- setup/devices.sh@56 -- # : 00:05:55.620 14:56:26 -- setup/devices.sh@59 -- # local pci status 00:05:55.620 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.620 14:56:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:55.620 14:56:26 -- setup/devices.sh@47 -- # setup output config 00:05:55.620 14:56:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:55.620 14:56:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:55.620 14:56:26 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.620 14:56:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:55.620 14:56:26 -- setup/devices.sh@63 -- # found=1 00:05:55.620 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.620 14:56:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.620 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.878 14:56:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:55.878 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:55.878 14:56:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.137 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.137 14:56:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.137 14:56:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:56.137 14:56:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:56.137 14:56:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:56.137 14:56:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:56.137 14:56:26 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:56.137 14:56:26 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:56.137 14:56:26 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:56.137 14:56:26 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:56.137 14:56:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:56.137 14:56:26 -- setup/devices.sh@51 -- # local test_file= 00:05:56.137 14:56:26 -- setup/devices.sh@53 -- # local found=0 00:05:56.137 14:56:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:56.137 14:56:26 -- setup/devices.sh@59 -- # local pci status 00:05:56.137 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.137 14:56:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:56.137 14:56:26 -- setup/devices.sh@47 -- # setup output config 00:05:56.137 14:56:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:56.137 14:56:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:56.396 14:56:26 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.396 14:56:26 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:56.396 14:56:26 -- setup/devices.sh@63 -- # found=1 00:05:56.396 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.396 14:56:26 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.396 14:56:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.655 14:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.655 14:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.655 14:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:56.655 14:56:27 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:56.655 14:56:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:56.655 14:56:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:56.655 14:56:27 -- setup/devices.sh@68 -- # return 0 00:05:56.655 14:56:27 -- setup/devices.sh@187 -- # cleanup_dm 00:05:56.655 14:56:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:56.655 14:56:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:56.655 14:56:27 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:56.914 14:56:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.914 14:56:27 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:56.914 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:56.914 14:56:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:56.914 14:56:27 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:56.914 00:05:56.914 real 0m4.539s 00:05:56.914 user 0m0.701s 00:05:56.914 sys 0m0.791s 00:05:56.914 14:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.914 14:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:56.914 ************************************ 00:05:56.914 END TEST dm_mount 00:05:56.914 ************************************ 00:05:56.914 14:56:27 -- setup/devices.sh@1 -- # cleanup 00:05:56.914 14:56:27 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:56.914 14:56:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:56.914 14:56:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:56.914 14:56:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:56.914 14:56:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:56.914 14:56:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:57.173 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:57.173 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:57.173 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:57.173 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:57.173 14:56:27 -- setup/devices.sh@12 -- # cleanup_dm 00:05:57.173 14:56:27 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:57.173 14:56:27 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:57.173 14:56:27 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:57.173 14:56:27 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:57.173 14:56:27 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:57.173 14:56:27 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:57.173 00:05:57.173 real 0m10.605s 00:05:57.173 user 0m2.436s 00:05:57.173 sys 0m2.564s 00:05:57.173 14:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.173 ************************************ 00:05:57.173 END TEST devices 00:05:57.173 ************************************ 00:05:57.173 14:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.173 ************************************ 00:05:57.173 END TEST setup.sh 00:05:57.173 ************************************ 00:05:57.173 00:05:57.173 real 0m22.562s 00:05:57.173 user 0m7.841s 00:05:57.173 sys 0m9.061s 00:05:57.173 14:56:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:57.173 14:56:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.173 14:56:27 -- 
spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:57.432 Hugepages 00:05:57.432 node hugesize free / total 00:05:57.432 node0 1048576kB 0 / 0 00:05:57.432 node0 2048kB 2048 / 2048 00:05:57.432 00:05:57.432 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:57.432 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:57.433 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:57.691 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:57.691 14:56:28 -- spdk/autotest.sh@128 -- # uname -s 00:05:57.691 14:56:28 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:57.691 14:56:28 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:57.691 14:56:28 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:58.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.259 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.259 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:58.519 14:56:29 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:59.455 14:56:30 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:59.455 14:56:30 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:59.455 14:56:30 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:59.455 14:56:30 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:59.455 14:56:30 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:59.455 14:56:30 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:59.455 14:56:30 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:59.455 14:56:30 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:59.455 14:56:30 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:59.455 14:56:30 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:05:59.455 14:56:30 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:59.455 14:56:30 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:59.714 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.973 Waiting for block devices as requested 00:05:59.973 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.973 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.973 14:56:30 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:59.973 14:56:30 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:59.974 14:56:30 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:59.974 14:56:30 -- 
common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:59.974 14:56:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:59.974 14:56:30 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:05:59.974 14:56:30 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:59.974 14:56:30 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:59.974 14:56:30 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:59.974 14:56:30 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:59.974 14:56:30 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:59.974 14:56:30 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:59.974 14:56:30 -- common/autotest_common.sh@1552 -- # continue 00:05:59.974 14:56:30 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:59.974 14:56:30 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:05:59.974 14:56:30 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:59.974 14:56:30 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:59.974 14:56:30 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:06:00.232 14:56:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:06:00.232 14:56:30 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:06:00.232 14:56:30 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:06:00.232 14:56:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:00.232 14:56:30 -- common/autotest_common.sh@1540 -- # grep oacs 00:06:00.232 14:56:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:00.232 14:56:30 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:06:00.232 14:56:30 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:06:00.233 14:56:30 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:06:00.233 14:56:30 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:06:00.233 14:56:30 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:06:00.233 14:56:30 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:06:00.233 14:56:30 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:06:00.233 14:56:30 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:06:00.233 14:56:30 -- common/autotest_common.sh@1552 -- # continue 00:06:00.233 14:56:30 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:06:00.233 14:56:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.233 14:56:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 14:56:30 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:06:00.233 14:56:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.233 14:56:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.233 14:56:30 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:00.801 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:01.064 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:06:01.064 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:06:01.064 14:56:31 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:06:01.064 14:56:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.064 14:56:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.064 14:56:31 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:06:01.064 14:56:31 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:06:01.064 14:56:31 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:06:01.064 14:56:31 -- common/autotest_common.sh@1572 -- # bdfs=() 00:06:01.064 14:56:31 -- common/autotest_common.sh@1572 -- # local bdfs 00:06:01.064 14:56:31 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:06:01.064 14:56:31 -- common/autotest_common.sh@1508 -- # bdfs=() 00:06:01.064 14:56:31 -- common/autotest_common.sh@1508 -- # local bdfs 00:06:01.064 14:56:31 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:01.064 14:56:31 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:01.064 14:56:31 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:06:01.064 14:56:31 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:06:01.064 14:56:31 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:06:01.064 14:56:31 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:01.064 14:56:31 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:06:01.064 14:56:31 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:01.064 14:56:31 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.064 14:56:31 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:06:01.064 14:56:31 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:06:01.064 14:56:31 -- common/autotest_common.sh@1575 -- # device=0x0010 00:06:01.064 14:56:31 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:01.064 14:56:31 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:06:01.064 14:56:31 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:06:01.064 14:56:31 -- common/autotest_common.sh@1588 -- # return 0 00:06:01.064 14:56:31 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:06:01.064 14:56:31 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:06:01.064 14:56:31 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:01.064 14:56:31 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:01.064 14:56:31 -- spdk/autotest.sh@160 -- # timing_enter lib 00:06:01.064 14:56:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.064 14:56:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.064 14:56:31 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:01.064 14:56:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.064 14:56:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.064 14:56:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.064 ************************************ 00:06:01.064 START TEST env 00:06:01.064 ************************************ 00:06:01.064 14:56:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:01.365 * Looking for test storage... 
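The id-ctrl parsing traced just above is how the pre-test NVMe check appears to work: it pulls the OACS field out of nvme id-ctrl, masks bit 3 (0x8, namespace management), and also requires the unallocated capacity (unvmcap) to be zero before continuing to the next controller. A minimal stand-alone sketch of the same check, assuming nvme-cli and the /dev/nvme0 controller shown in the log, would be:

  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
  (( oacs & 0x8 )) && echo "namespace management supported"
  unvmcap=$(nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
  (( ${unvmcap// /} == 0 )) && echo "no unallocated NVM capacity"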
00:06:01.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:01.365 14:56:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.365 14:56:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.365 14:56:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.365 14:56:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.365 14:56:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.365 14:56:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.365 14:56:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.365 14:56:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.365 14:56:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.365 14:56:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.365 14:56:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.365 14:56:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.365 14:56:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.365 14:56:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.365 14:56:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.365 14:56:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.365 14:56:32 -- scripts/common.sh@344 -- # : 1 00:06:01.365 14:56:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.365 14:56:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.365 14:56:32 -- scripts/common.sh@364 -- # decimal 1 00:06:01.365 14:56:32 -- scripts/common.sh@352 -- # local d=1 00:06:01.365 14:56:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.365 14:56:32 -- scripts/common.sh@354 -- # echo 1 00:06:01.365 14:56:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.365 14:56:32 -- scripts/common.sh@365 -- # decimal 2 00:06:01.365 14:56:32 -- scripts/common.sh@352 -- # local d=2 00:06:01.365 14:56:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.365 14:56:32 -- scripts/common.sh@354 -- # echo 2 00:06:01.365 14:56:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.365 14:56:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.365 14:56:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.365 14:56:32 -- scripts/common.sh@367 -- # return 0 00:06:01.365 14:56:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.365 14:56:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.365 --rc genhtml_branch_coverage=1 00:06:01.365 --rc genhtml_function_coverage=1 00:06:01.365 --rc genhtml_legend=1 00:06:01.365 --rc geninfo_all_blocks=1 00:06:01.365 --rc geninfo_unexecuted_blocks=1 00:06:01.365 00:06:01.365 ' 00:06:01.365 14:56:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.365 --rc genhtml_branch_coverage=1 00:06:01.365 --rc genhtml_function_coverage=1 00:06:01.365 --rc genhtml_legend=1 00:06:01.365 --rc geninfo_all_blocks=1 00:06:01.365 --rc geninfo_unexecuted_blocks=1 00:06:01.365 00:06:01.365 ' 00:06:01.365 14:56:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.365 --rc genhtml_branch_coverage=1 00:06:01.365 --rc genhtml_function_coverage=1 00:06:01.365 --rc genhtml_legend=1 00:06:01.365 --rc geninfo_all_blocks=1 00:06:01.365 --rc geninfo_unexecuted_blocks=1 00:06:01.365 00:06:01.365 ' 00:06:01.365 14:56:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.365 --rc genhtml_branch_coverage=1 00:06:01.365 --rc genhtml_function_coverage=1 00:06:01.365 --rc genhtml_legend=1 00:06:01.365 --rc geninfo_all_blocks=1 00:06:01.365 --rc geninfo_unexecuted_blocks=1 00:06:01.365 00:06:01.365 ' 00:06:01.365 14:56:32 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:01.365 14:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.365 14:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.365 14:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.365 ************************************ 00:06:01.365 START TEST env_memory 00:06:01.365 ************************************ 00:06:01.365 14:56:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:01.365 00:06:01.365 00:06:01.365 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.365 http://cunit.sourceforge.net/ 00:06:01.365 00:06:01.365 00:06:01.365 Suite: memory 00:06:01.365 Test: alloc and free memory map ...[2024-11-20 14:56:32.108835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:01.365 passed 00:06:01.365 Test: mem map translation ...[2024-11-20 14:56:32.140902] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:01.365 [2024-11-20 14:56:32.141098] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:01.365 [2024-11-20 14:56:32.141399] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:01.365 [2024-11-20 14:56:32.141587] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:01.624 passed 00:06:01.624 Test: mem map registration ...[2024-11-20 14:56:32.205822] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:01.624 [2024-11-20 14:56:32.206044] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:01.624 passed 00:06:01.624 Test: mem map adjacent registrations ...passed 00:06:01.624 00:06:01.624 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.624 suites 1 1 n/a 0 0 00:06:01.624 tests 4 4 4 0 0 00:06:01.624 asserts 152 152 152 0 n/a 00:06:01.624 00:06:01.624 Elapsed time = 0.215 seconds 00:06:01.624 00:06:01.624 real 0m0.233s 00:06:01.624 user 0m0.213s 00:06:01.624 sys 0m0.015s 00:06:01.624 14:56:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.624 14:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.624 ************************************ 00:06:01.624 END TEST env_memory 00:06:01.624 ************************************ 00:06:01.624 14:56:32 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:01.624 14:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.624 14:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.624 14:56:32 -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.624 ************************************ 00:06:01.624 START TEST env_vtophys 00:06:01.624 ************************************ 00:06:01.624 14:56:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:01.624 EAL: lib.eal log level changed from notice to debug 00:06:01.624 EAL: Detected lcore 0 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 1 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 2 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 3 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 4 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 5 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 6 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 7 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 8 as core 0 on socket 0 00:06:01.624 EAL: Detected lcore 9 as core 0 on socket 0 00:06:01.624 EAL: Maximum logical cores by configuration: 128 00:06:01.624 EAL: Detected CPU lcores: 10 00:06:01.624 EAL: Detected NUMA nodes: 1 00:06:01.624 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:01.624 EAL: Detected shared linkage of DPDK 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:01.624 EAL: Registered [vdev] bus. 00:06:01.624 EAL: bus.vdev log level changed from disabled to notice 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:01.624 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:01.624 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:01.624 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:01.624 EAL: No shared files mode enabled, IPC will be disabled 00:06:01.624 EAL: No shared files mode enabled, IPC is disabled 00:06:01.624 EAL: Selected IOVA mode 'PA' 00:06:01.624 EAL: Probing VFIO support... 00:06:01.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:01.624 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:01.624 EAL: Ask a virtual area of 0x2e000 bytes 00:06:01.624 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:01.624 EAL: Setting up physically contiguous memory... 
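The EAL probe messages above (vfio module not found, IOVA mode 'PA', 2 MB hugepages only) describe the environment that setup.sh prepared before the env suite started. A small sketch of how those preconditions could be checked by hand on a similar guest, using only standard proc/sysfs paths, is:

  lsmod | grep -q vfio_pci || echo "vfio-pci not loaded; EAL skips VFIO and stays in IOVA=PA"
  grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages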
00:06:01.624 EAL: Setting maximum number of open files to 524288 00:06:01.624 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:01.624 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:01.624 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.624 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:01.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.624 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.624 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:01.624 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:01.624 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.624 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:01.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.624 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.624 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:01.624 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:01.624 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.624 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:01.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.624 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.624 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:01.624 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:01.624 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.624 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:01.624 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.624 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.624 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:01.624 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:01.624 EAL: Hugepages will be freed exactly as allocated. 00:06:01.624 EAL: No shared files mode enabled, IPC is disabled 00:06:01.624 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: TSC frequency is ~2200000 KHz 00:06:01.883 EAL: Main lcore 0 is ready (tid=7f95a5beea00;cpuset=[0]) 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 0 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 2MB 00:06:01.883 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:01.883 EAL: Mem event callback 'spdk:(nil)' registered 00:06:01.883 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:01.883 00:06:01.883 00:06:01.883 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.883 http://cunit.sourceforge.net/ 00:06:01.883 00:06:01.883 00:06:01.883 Suite: components_suite 00:06:01.883 Test: vtophys_malloc_test ...passed 00:06:01.883 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 4MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 4MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 6MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 6MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 10MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 10MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 18MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 18MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 34MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 34MB 00:06:01.883 EAL: Trying to obtain current memory policy. 
00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 66MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 66MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 130MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was shrunk by 130MB 00:06:01.883 EAL: Trying to obtain current memory policy. 00:06:01.883 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.883 EAL: Restoring previous memory policy: 4 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.883 EAL: request: mp_malloc_sync 00:06:01.883 EAL: No shared files mode enabled, IPC is disabled 00:06:01.883 EAL: Heap on socket 0 was expanded by 258MB 00:06:01.883 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.142 EAL: request: mp_malloc_sync 00:06:02.142 EAL: No shared files mode enabled, IPC is disabled 00:06:02.142 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.142 EAL: Trying to obtain current memory policy. 00:06:02.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.142 EAL: Restoring previous memory policy: 4 00:06:02.142 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.142 EAL: request: mp_malloc_sync 00:06:02.142 EAL: No shared files mode enabled, IPC is disabled 00:06:02.142 EAL: Heap on socket 0 was expanded by 514MB 00:06:02.142 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.142 EAL: request: mp_malloc_sync 00:06:02.142 EAL: No shared files mode enabled, IPC is disabled 00:06:02.142 EAL: Heap on socket 0 was shrunk by 514MB 00:06:02.142 EAL: Trying to obtain current memory policy. 
00:06:02.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.401 EAL: Restoring previous memory policy: 4 00:06:02.401 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.401 EAL: request: mp_malloc_sync 00:06:02.401 EAL: No shared files mode enabled, IPC is disabled 00:06:02.401 EAL: Heap on socket 0 was expanded by 1026MB 00:06:02.401 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.660 EAL: request: mp_malloc_sync 00:06:02.660 EAL: No shared files mode enabled, IPC is disabled 00:06:02.660 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:02.660 passed 00:06:02.660 00:06:02.660 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.660 suites 1 1 n/a 0 0 00:06:02.660 tests 2 2 2 0 0 00:06:02.660 asserts 5134 5134 5134 0 n/a 00:06:02.660 00:06:02.660 Elapsed time = 0.705 seconds 00:06:02.660 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.660 EAL: request: mp_malloc_sync 00:06:02.660 EAL: No shared files mode enabled, IPC is disabled 00:06:02.660 EAL: Heap on socket 0 was shrunk by 2MB 00:06:02.660 EAL: No shared files mode enabled, IPC is disabled 00:06:02.660 EAL: No shared files mode enabled, IPC is disabled 00:06:02.660 EAL: No shared files mode enabled, IPC is disabled 00:06:02.660 00:06:02.660 real 0m0.897s 00:06:02.660 user 0m0.457s 00:06:02.660 sys 0m0.308s 00:06:02.660 14:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.660 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 ************************************ 00:06:02.660 END TEST env_vtophys 00:06:02.660 ************************************ 00:06:02.660 14:56:33 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.660 14:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.660 14:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.660 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 ************************************ 00:06:02.660 START TEST env_pci 00:06:02.660 ************************************ 00:06:02.660 14:56:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:02.660 00:06:02.660 00:06:02.660 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.660 http://cunit.sourceforge.net/ 00:06:02.660 00:06:02.660 00:06:02.660 Suite: pci 00:06:02.660 Test: pci_hook ...[2024-11-20 14:56:33.307053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65491 has claimed it 00:06:02.660 passed 00:06:02.660 00:06:02.660 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.660 suites 1 1 n/a 0 0 00:06:02.660 tests 1 1 1 0 0 00:06:02.660 asserts 25 25 25 0 n/a 00:06:02.660 00:06:02.660 Elapsed time = 0.002 seconds 00:06:02.660 EAL: Cannot find device (10000:00:01.0) 00:06:02.660 EAL: Failed to attach device on primary process 00:06:02.660 00:06:02.660 real 0m0.018s 00:06:02.660 user 0m0.009s 00:06:02.660 sys 0m0.009s 00:06:02.660 14:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.660 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.660 ************************************ 00:06:02.660 END TEST env_pci 00:06:02.660 ************************************ 00:06:02.660 14:56:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:02.661 14:56:33 -- env/env.sh@15 -- # uname 00:06:02.661 14:56:33 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:02.661 14:56:33 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:06:02.661 14:56:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.661 14:56:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:02.661 14:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.661 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.661 ************************************ 00:06:02.661 START TEST env_dpdk_post_init 00:06:02.661 ************************************ 00:06:02.661 14:56:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.661 EAL: Detected CPU lcores: 10 00:06:02.661 EAL: Detected NUMA nodes: 1 00:06:02.661 EAL: Detected shared linkage of DPDK 00:06:02.661 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.661 EAL: Selected IOVA mode 'PA' 00:06:02.920 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:06:02.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:06:02.920 Starting DPDK initialization... 00:06:02.920 Starting SPDK post initialization... 00:06:02.920 SPDK NVMe probe 00:06:02.920 Attaching to 0000:00:06.0 00:06:02.920 Attaching to 0000:00:07.0 00:06:02.920 Attached to 0000:00:06.0 00:06:02.920 Attached to 0000:00:07.0 00:06:02.920 Cleaning up... 00:06:02.920 00:06:02.920 real 0m0.173s 00:06:02.920 user 0m0.040s 00:06:02.920 sys 0m0.033s 00:06:02.920 14:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:02.920 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.920 ************************************ 00:06:02.920 END TEST env_dpdk_post_init 00:06:02.920 ************************************ 00:06:02.920 14:56:33 -- env/env.sh@26 -- # uname 00:06:02.920 14:56:33 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:02.920 14:56:33 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.920 14:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.920 14:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.920 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:02.920 ************************************ 00:06:02.920 START TEST env_mem_callbacks 00:06:02.920 ************************************ 00:06:02.920 14:56:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:02.920 EAL: Detected CPU lcores: 10 00:06:02.920 EAL: Detected NUMA nodes: 1 00:06:02.920 EAL: Detected shared linkage of DPDK 00:06:02.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.920 EAL: Selected IOVA mode 'PA' 00:06:03.179 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.179 00:06:03.179 00:06:03.179 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.179 http://cunit.sourceforge.net/ 00:06:03.179 00:06:03.179 00:06:03.179 Suite: memory 00:06:03.179 Test: test ... 
00:06:03.179 register 0x200000200000 2097152 00:06:03.179 malloc 3145728 00:06:03.179 register 0x200000400000 4194304 00:06:03.179 buf 0x200000500000 len 3145728 PASSED 00:06:03.179 malloc 64 00:06:03.179 buf 0x2000004fff40 len 64 PASSED 00:06:03.179 malloc 4194304 00:06:03.179 register 0x200000800000 6291456 00:06:03.179 buf 0x200000a00000 len 4194304 PASSED 00:06:03.179 free 0x200000500000 3145728 00:06:03.179 free 0x2000004fff40 64 00:06:03.179 unregister 0x200000400000 4194304 PASSED 00:06:03.179 free 0x200000a00000 4194304 00:06:03.179 unregister 0x200000800000 6291456 PASSED 00:06:03.179 malloc 8388608 00:06:03.179 register 0x200000400000 10485760 00:06:03.179 buf 0x200000600000 len 8388608 PASSED 00:06:03.179 free 0x200000600000 8388608 00:06:03.179 unregister 0x200000400000 10485760 PASSED 00:06:03.179 passed 00:06:03.179 00:06:03.179 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.179 suites 1 1 n/a 0 0 00:06:03.179 tests 1 1 1 0 0 00:06:03.179 asserts 15 15 15 0 n/a 00:06:03.179 00:06:03.179 Elapsed time = 0.008 seconds 00:06:03.179 00:06:03.179 real 0m0.140s 00:06:03.179 user 0m0.024s 00:06:03.179 sys 0m0.016s 00:06:03.179 14:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.179 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.179 ************************************ 00:06:03.179 END TEST env_mem_callbacks 00:06:03.179 ************************************ 00:06:03.179 00:06:03.179 real 0m1.915s 00:06:03.179 user 0m0.955s 00:06:03.179 sys 0m0.606s 00:06:03.179 14:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.179 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.179 ************************************ 00:06:03.179 END TEST env 00:06:03.179 ************************************ 00:06:03.179 14:56:33 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:03.179 14:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.179 14:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.179 14:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.179 ************************************ 00:06:03.179 START TEST rpc 00:06:03.179 ************************************ 00:06:03.179 14:56:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:03.179 * Looking for test storage... 
00:06:03.179 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:03.179 14:56:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:03.179 14:56:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:03.179 14:56:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:03.439 14:56:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:03.439 14:56:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:03.439 14:56:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:03.439 14:56:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:03.439 14:56:33 -- scripts/common.sh@335 -- # IFS=.-: 00:06:03.439 14:56:33 -- scripts/common.sh@335 -- # read -ra ver1 00:06:03.439 14:56:33 -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.439 14:56:33 -- scripts/common.sh@336 -- # read -ra ver2 00:06:03.439 14:56:33 -- scripts/common.sh@337 -- # local 'op=<' 00:06:03.439 14:56:33 -- scripts/common.sh@339 -- # ver1_l=2 00:06:03.439 14:56:33 -- scripts/common.sh@340 -- # ver2_l=1 00:06:03.439 14:56:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:03.439 14:56:33 -- scripts/common.sh@343 -- # case "$op" in 00:06:03.439 14:56:33 -- scripts/common.sh@344 -- # : 1 00:06:03.439 14:56:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:03.439 14:56:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.439 14:56:33 -- scripts/common.sh@364 -- # decimal 1 00:06:03.439 14:56:33 -- scripts/common.sh@352 -- # local d=1 00:06:03.439 14:56:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.439 14:56:33 -- scripts/common.sh@354 -- # echo 1 00:06:03.439 14:56:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:03.439 14:56:33 -- scripts/common.sh@365 -- # decimal 2 00:06:03.439 14:56:34 -- scripts/common.sh@352 -- # local d=2 00:06:03.439 14:56:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.439 14:56:34 -- scripts/common.sh@354 -- # echo 2 00:06:03.439 14:56:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:03.439 14:56:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:03.439 14:56:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:03.439 14:56:34 -- scripts/common.sh@367 -- # return 0 00:06:03.439 14:56:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.439 14:56:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.439 --rc genhtml_branch_coverage=1 00:06:03.439 --rc genhtml_function_coverage=1 00:06:03.439 --rc genhtml_legend=1 00:06:03.439 --rc geninfo_all_blocks=1 00:06:03.439 --rc geninfo_unexecuted_blocks=1 00:06:03.439 00:06:03.439 ' 00:06:03.439 14:56:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.439 --rc genhtml_branch_coverage=1 00:06:03.439 --rc genhtml_function_coverage=1 00:06:03.439 --rc genhtml_legend=1 00:06:03.439 --rc geninfo_all_blocks=1 00:06:03.439 --rc geninfo_unexecuted_blocks=1 00:06:03.439 00:06:03.439 ' 00:06:03.439 14:56:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.439 --rc genhtml_branch_coverage=1 00:06:03.439 --rc genhtml_function_coverage=1 00:06:03.439 --rc genhtml_legend=1 00:06:03.439 --rc geninfo_all_blocks=1 00:06:03.439 --rc geninfo_unexecuted_blocks=1 00:06:03.439 00:06:03.439 ' 00:06:03.439 14:56:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:03.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.439 --rc genhtml_branch_coverage=1 00:06:03.439 --rc genhtml_function_coverage=1 00:06:03.439 --rc genhtml_legend=1 00:06:03.439 --rc geninfo_all_blocks=1 00:06:03.439 --rc geninfo_unexecuted_blocks=1 00:06:03.439 00:06:03.439 ' 00:06:03.439 14:56:34 -- rpc/rpc.sh@65 -- # spdk_pid=65613 00:06:03.439 14:56:34 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.439 14:56:34 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:03.439 14:56:34 -- rpc/rpc.sh@67 -- # waitforlisten 65613 00:06:03.439 14:56:34 -- common/autotest_common.sh@829 -- # '[' -z 65613 ']' 00:06:03.439 14:56:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.439 14:56:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.439 14:56:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.439 14:56:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.439 14:56:34 -- common/autotest_common.sh@10 -- # set +x 00:06:03.439 [2024-11-20 14:56:34.068089] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:03.439 [2024-11-20 14:56:34.068200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65613 ] 00:06:03.439 [2024-11-20 14:56:34.202860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.699 [2024-11-20 14:56:34.244686] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:03.699 [2024-11-20 14:56:34.244866] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:03.699 [2024-11-20 14:56:34.244888] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65613' to capture a snapshot of events at runtime. 00:06:03.699 [2024-11-20 14:56:34.244899] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65613 for offline analysis/debug. 
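At this point rpc.sh has launched spdk_tgt with the bdev tracepoint group enabled and waitforlisten is polling for the RPC socket; the rpc_integrity test that follows drives bdev_malloc_create, bdev_passthru_create and bdev_get_bdevs through rpc_cmd. A hedged sketch of an equivalent manual session (binary path, socket path and RPC names taken from the log; output not reproduced here) looks like:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null   # confirms the RPC socket is up
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 8 512          # 8 MB, 512-byte blocks -> Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq length        # 2 while Passthru0 is layered on Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_passthru_delete Passthru0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0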
00:06:03.699 [2024-11-20 14:56:34.244937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.634 14:56:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.634 14:56:35 -- common/autotest_common.sh@862 -- # return 0 00:06:04.634 14:56:35 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.634 14:56:35 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:04.634 14:56:35 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:04.634 14:56:35 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:04.634 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.634 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.634 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.634 ************************************ 00:06:04.634 START TEST rpc_integrity 00:06:04.634 ************************************ 00:06:04.634 14:56:35 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:04.634 14:56:35 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:04.634 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.634 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.634 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.634 14:56:35 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:04.634 14:56:35 -- rpc/rpc.sh@13 -- # jq length 00:06:04.634 14:56:35 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:04.634 14:56:35 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:04.634 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.634 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.634 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.634 14:56:35 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:04.634 14:56:35 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:04.634 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.634 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.634 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.634 14:56:35 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:04.634 { 00:06:04.634 "name": "Malloc0", 00:06:04.634 "aliases": [ 00:06:04.634 "bf050706-a9ed-4a10-a4b0-97d4a10b4c63" 00:06:04.634 ], 00:06:04.634 "product_name": "Malloc disk", 00:06:04.634 "block_size": 512, 00:06:04.634 "num_blocks": 16384, 00:06:04.634 "uuid": "bf050706-a9ed-4a10-a4b0-97d4a10b4c63", 00:06:04.634 "assigned_rate_limits": { 00:06:04.634 "rw_ios_per_sec": 0, 00:06:04.634 "rw_mbytes_per_sec": 0, 00:06:04.634 "r_mbytes_per_sec": 0, 00:06:04.634 "w_mbytes_per_sec": 0 00:06:04.634 }, 00:06:04.635 "claimed": false, 00:06:04.635 "zoned": false, 00:06:04.635 "supported_io_types": { 00:06:04.635 "read": true, 00:06:04.635 "write": true, 00:06:04.635 "unmap": true, 00:06:04.635 "write_zeroes": true, 00:06:04.635 "flush": true, 00:06:04.635 "reset": true, 00:06:04.635 "compare": false, 00:06:04.635 "compare_and_write": false, 00:06:04.635 "abort": true, 00:06:04.635 "nvme_admin": false, 00:06:04.635 "nvme_io": false 00:06:04.635 }, 00:06:04.635 "memory_domains": [ 00:06:04.635 { 00:06:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.635 
"dma_device_type": 2 00:06:04.635 } 00:06:04.635 ], 00:06:04.635 "driver_specific": {} 00:06:04.635 } 00:06:04.635 ]' 00:06:04.635 14:56:35 -- rpc/rpc.sh@17 -- # jq length 00:06:04.635 14:56:35 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:04.635 14:56:35 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:04.635 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 [2024-11-20 14:56:35.269322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:04.635 [2024-11-20 14:56:35.269386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:04.635 [2024-11-20 14:56:35.269404] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2e030 00:06:04.635 [2024-11-20 14:56:35.269414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:04.635 [2024-11-20 14:56:35.270918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:04.635 [2024-11-20 14:56:35.270955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:04.635 Passthru0 00:06:04.635 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:56:35 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:04.635 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:56:35 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:04.635 { 00:06:04.635 "name": "Malloc0", 00:06:04.635 "aliases": [ 00:06:04.635 "bf050706-a9ed-4a10-a4b0-97d4a10b4c63" 00:06:04.635 ], 00:06:04.635 "product_name": "Malloc disk", 00:06:04.635 "block_size": 512, 00:06:04.635 "num_blocks": 16384, 00:06:04.635 "uuid": "bf050706-a9ed-4a10-a4b0-97d4a10b4c63", 00:06:04.635 "assigned_rate_limits": { 00:06:04.635 "rw_ios_per_sec": 0, 00:06:04.635 "rw_mbytes_per_sec": 0, 00:06:04.635 "r_mbytes_per_sec": 0, 00:06:04.635 "w_mbytes_per_sec": 0 00:06:04.635 }, 00:06:04.635 "claimed": true, 00:06:04.635 "claim_type": "exclusive_write", 00:06:04.635 "zoned": false, 00:06:04.635 "supported_io_types": { 00:06:04.635 "read": true, 00:06:04.635 "write": true, 00:06:04.635 "unmap": true, 00:06:04.635 "write_zeroes": true, 00:06:04.635 "flush": true, 00:06:04.635 "reset": true, 00:06:04.635 "compare": false, 00:06:04.635 "compare_and_write": false, 00:06:04.635 "abort": true, 00:06:04.635 "nvme_admin": false, 00:06:04.635 "nvme_io": false 00:06:04.635 }, 00:06:04.635 "memory_domains": [ 00:06:04.635 { 00:06:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.635 "dma_device_type": 2 00:06:04.635 } 00:06:04.635 ], 00:06:04.635 "driver_specific": {} 00:06:04.635 }, 00:06:04.635 { 00:06:04.635 "name": "Passthru0", 00:06:04.635 "aliases": [ 00:06:04.635 "619f08bb-8dc6-520f-b7ef-ea85389504f5" 00:06:04.635 ], 00:06:04.635 "product_name": "passthru", 00:06:04.635 "block_size": 512, 00:06:04.635 "num_blocks": 16384, 00:06:04.635 "uuid": "619f08bb-8dc6-520f-b7ef-ea85389504f5", 00:06:04.635 "assigned_rate_limits": { 00:06:04.635 "rw_ios_per_sec": 0, 00:06:04.635 "rw_mbytes_per_sec": 0, 00:06:04.635 "r_mbytes_per_sec": 0, 00:06:04.635 "w_mbytes_per_sec": 0 00:06:04.635 }, 00:06:04.635 "claimed": false, 00:06:04.635 "zoned": false, 00:06:04.635 "supported_io_types": { 00:06:04.635 "read": true, 00:06:04.635 "write": true, 00:06:04.635 "unmap": true, 00:06:04.635 
"write_zeroes": true, 00:06:04.635 "flush": true, 00:06:04.635 "reset": true, 00:06:04.635 "compare": false, 00:06:04.635 "compare_and_write": false, 00:06:04.635 "abort": true, 00:06:04.635 "nvme_admin": false, 00:06:04.635 "nvme_io": false 00:06:04.635 }, 00:06:04.635 "memory_domains": [ 00:06:04.635 { 00:06:04.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.635 "dma_device_type": 2 00:06:04.635 } 00:06:04.635 ], 00:06:04.635 "driver_specific": { 00:06:04.635 "passthru": { 00:06:04.635 "name": "Passthru0", 00:06:04.635 "base_bdev_name": "Malloc0" 00:06:04.635 } 00:06:04.635 } 00:06:04.635 } 00:06:04.635 ]' 00:06:04.635 14:56:35 -- rpc/rpc.sh@21 -- # jq length 00:06:04.635 14:56:35 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:04.635 14:56:35 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:04.635 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:56:35 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:04.635 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:56:35 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:04.635 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.635 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.635 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.635 14:56:35 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:04.635 14:56:35 -- rpc/rpc.sh@26 -- # jq length 00:06:04.895 14:56:35 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:04.895 00:06:04.895 real 0m0.325s 00:06:04.895 user 0m0.218s 00:06:04.895 sys 0m0.035s 00:06:04.895 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.895 ************************************ 00:06:04.895 END TEST rpc_integrity 00:06:04.895 ************************************ 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:56:35 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:04.895 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.895 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 ************************************ 00:06:04.895 START TEST rpc_plugins 00:06:04.895 ************************************ 00:06:04.895 14:56:35 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:04.895 14:56:35 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:04.895 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.895 14:56:35 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:04.895 14:56:35 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:04.895 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.895 14:56:35 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:04.895 { 00:06:04.895 "name": "Malloc1", 00:06:04.895 "aliases": [ 00:06:04.895 "6ecb5d86-93a8-46a2-ae00-66736736e73d" 00:06:04.895 ], 00:06:04.895 "product_name": "Malloc disk", 00:06:04.895 
"block_size": 4096, 00:06:04.895 "num_blocks": 256, 00:06:04.895 "uuid": "6ecb5d86-93a8-46a2-ae00-66736736e73d", 00:06:04.895 "assigned_rate_limits": { 00:06:04.895 "rw_ios_per_sec": 0, 00:06:04.895 "rw_mbytes_per_sec": 0, 00:06:04.895 "r_mbytes_per_sec": 0, 00:06:04.895 "w_mbytes_per_sec": 0 00:06:04.895 }, 00:06:04.895 "claimed": false, 00:06:04.895 "zoned": false, 00:06:04.895 "supported_io_types": { 00:06:04.895 "read": true, 00:06:04.895 "write": true, 00:06:04.895 "unmap": true, 00:06:04.895 "write_zeroes": true, 00:06:04.895 "flush": true, 00:06:04.895 "reset": true, 00:06:04.895 "compare": false, 00:06:04.895 "compare_and_write": false, 00:06:04.895 "abort": true, 00:06:04.895 "nvme_admin": false, 00:06:04.895 "nvme_io": false 00:06:04.895 }, 00:06:04.895 "memory_domains": [ 00:06:04.895 { 00:06:04.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:04.895 "dma_device_type": 2 00:06:04.895 } 00:06:04.895 ], 00:06:04.895 "driver_specific": {} 00:06:04.895 } 00:06:04.895 ]' 00:06:04.895 14:56:35 -- rpc/rpc.sh@32 -- # jq length 00:06:04.895 14:56:35 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:04.895 14:56:35 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:04.895 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.895 14:56:35 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:04.895 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.895 14:56:35 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:04.895 14:56:35 -- rpc/rpc.sh@36 -- # jq length 00:06:04.895 14:56:35 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:04.895 00:06:04.895 real 0m0.165s 00:06:04.895 user 0m0.107s 00:06:04.895 sys 0m0.023s 00:06:04.895 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:04.895 ************************************ 00:06:04.895 END TEST rpc_plugins 00:06:04.895 ************************************ 00:06:04.895 14:56:35 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:04.895 14:56:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.895 14:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.895 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 ************************************ 00:06:05.154 START TEST rpc_trace_cmd_test 00:06:05.154 ************************************ 00:06:05.154 14:56:35 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:05.154 14:56:35 -- rpc/rpc.sh@40 -- # local info 00:06:05.154 14:56:35 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:05.154 14:56:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.154 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.154 14:56:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.154 14:56:35 -- rpc/rpc.sh@42 -- # info='{ 00:06:05.154 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65613", 00:06:05.154 "tpoint_group_mask": "0x8", 00:06:05.154 "iscsi_conn": { 00:06:05.154 "mask": "0x2", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "scsi": { 00:06:05.154 "mask": "0x4", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "bdev": { 00:06:05.154 "mask": "0x8", 00:06:05.154 "tpoint_mask": 
"0xffffffffffffffff" 00:06:05.154 }, 00:06:05.154 "nvmf_rdma": { 00:06:05.154 "mask": "0x10", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "nvmf_tcp": { 00:06:05.154 "mask": "0x20", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "ftl": { 00:06:05.154 "mask": "0x40", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "blobfs": { 00:06:05.154 "mask": "0x80", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "dsa": { 00:06:05.154 "mask": "0x200", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "thread": { 00:06:05.154 "mask": "0x400", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "nvme_pcie": { 00:06:05.154 "mask": "0x800", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "iaa": { 00:06:05.154 "mask": "0x1000", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "nvme_tcp": { 00:06:05.154 "mask": "0x2000", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 }, 00:06:05.154 "bdev_nvme": { 00:06:05.154 "mask": "0x4000", 00:06:05.154 "tpoint_mask": "0x0" 00:06:05.154 } 00:06:05.154 }' 00:06:05.154 14:56:35 -- rpc/rpc.sh@43 -- # jq length 00:06:05.154 14:56:35 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:05.154 14:56:35 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:05.154 14:56:35 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:05.155 14:56:35 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:05.155 14:56:35 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:05.155 14:56:35 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:05.155 14:56:35 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:05.155 14:56:35 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:05.413 14:56:35 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:05.413 00:06:05.413 real 0m0.278s 00:06:05.413 user 0m0.250s 00:06:05.413 sys 0m0.022s 00:06:05.413 14:56:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.413 14:56:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 ************************************ 00:06:05.413 END TEST rpc_trace_cmd_test 00:06:05.413 ************************************ 00:06:05.413 14:56:36 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:05.413 14:56:36 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:05.413 14:56:36 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:05.413 14:56:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.413 14:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.413 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 ************************************ 00:06:05.413 START TEST rpc_daemon_integrity 00:06:05.413 ************************************ 00:06:05.413 14:56:36 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:05.413 14:56:36 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:05.413 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.413 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.413 14:56:36 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:05.413 14:56:36 -- rpc/rpc.sh@13 -- # jq length 00:06:05.413 14:56:36 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:05.413 14:56:36 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:05.413 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.413 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.413 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.413 14:56:36 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:05.413 14:56:36 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:05.413 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.413 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 14:56:36 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:05.414 { 00:06:05.414 "name": "Malloc2", 00:06:05.414 "aliases": [ 00:06:05.414 "e78ae78e-fbe9-4b57-9f39-c7d74356d67d" 00:06:05.414 ], 00:06:05.414 "product_name": "Malloc disk", 00:06:05.414 "block_size": 512, 00:06:05.414 "num_blocks": 16384, 00:06:05.414 "uuid": "e78ae78e-fbe9-4b57-9f39-c7d74356d67d", 00:06:05.414 "assigned_rate_limits": { 00:06:05.414 "rw_ios_per_sec": 0, 00:06:05.414 "rw_mbytes_per_sec": 0, 00:06:05.414 "r_mbytes_per_sec": 0, 00:06:05.414 "w_mbytes_per_sec": 0 00:06:05.414 }, 00:06:05.414 "claimed": false, 00:06:05.414 "zoned": false, 00:06:05.414 "supported_io_types": { 00:06:05.414 "read": true, 00:06:05.414 "write": true, 00:06:05.414 "unmap": true, 00:06:05.414 "write_zeroes": true, 00:06:05.414 "flush": true, 00:06:05.414 "reset": true, 00:06:05.414 "compare": false, 00:06:05.414 "compare_and_write": false, 00:06:05.414 "abort": true, 00:06:05.414 "nvme_admin": false, 00:06:05.414 "nvme_io": false 00:06:05.414 }, 00:06:05.414 "memory_domains": [ 00:06:05.414 { 00:06:05.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.414 "dma_device_type": 2 00:06:05.414 } 00:06:05.414 ], 00:06:05.414 "driver_specific": {} 00:06:05.414 } 00:06:05.414 ]' 00:06:05.414 14:56:36 -- rpc/rpc.sh@17 -- # jq length 00:06:05.414 14:56:36 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:05.414 14:56:36 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:05.414 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.414 [2024-11-20 14:56:36.189700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:05.414 [2024-11-20 14:56:36.189777] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:05.414 [2024-11-20 14:56:36.189812] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ccfe0 00:06:05.414 [2024-11-20 14:56:36.189823] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:05.414 [2024-11-20 14:56:36.191277] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:05.414 [2024-11-20 14:56:36.191313] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:05.414 Passthru0 00:06:05.414 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.414 14:56:36 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:05.414 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.414 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.673 14:56:36 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:05.673 { 00:06:05.673 "name": "Malloc2", 00:06:05.673 "aliases": [ 00:06:05.673 "e78ae78e-fbe9-4b57-9f39-c7d74356d67d" 00:06:05.673 ], 00:06:05.673 "product_name": "Malloc disk", 00:06:05.673 "block_size": 512, 00:06:05.673 "num_blocks": 16384, 00:06:05.673 "uuid": "e78ae78e-fbe9-4b57-9f39-c7d74356d67d", 00:06:05.673 "assigned_rate_limits": { 00:06:05.673 "rw_ios_per_sec": 0, 00:06:05.673 "rw_mbytes_per_sec": 0, 00:06:05.673 "r_mbytes_per_sec": 0, 00:06:05.673 
"w_mbytes_per_sec": 0 00:06:05.673 }, 00:06:05.673 "claimed": true, 00:06:05.673 "claim_type": "exclusive_write", 00:06:05.673 "zoned": false, 00:06:05.673 "supported_io_types": { 00:06:05.673 "read": true, 00:06:05.673 "write": true, 00:06:05.673 "unmap": true, 00:06:05.673 "write_zeroes": true, 00:06:05.673 "flush": true, 00:06:05.673 "reset": true, 00:06:05.673 "compare": false, 00:06:05.673 "compare_and_write": false, 00:06:05.673 "abort": true, 00:06:05.673 "nvme_admin": false, 00:06:05.673 "nvme_io": false 00:06:05.673 }, 00:06:05.673 "memory_domains": [ 00:06:05.673 { 00:06:05.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.673 "dma_device_type": 2 00:06:05.673 } 00:06:05.673 ], 00:06:05.673 "driver_specific": {} 00:06:05.673 }, 00:06:05.673 { 00:06:05.673 "name": "Passthru0", 00:06:05.673 "aliases": [ 00:06:05.673 "15af02ce-76db-5d86-be92-b6d10e9cf632" 00:06:05.673 ], 00:06:05.673 "product_name": "passthru", 00:06:05.673 "block_size": 512, 00:06:05.673 "num_blocks": 16384, 00:06:05.673 "uuid": "15af02ce-76db-5d86-be92-b6d10e9cf632", 00:06:05.673 "assigned_rate_limits": { 00:06:05.673 "rw_ios_per_sec": 0, 00:06:05.673 "rw_mbytes_per_sec": 0, 00:06:05.673 "r_mbytes_per_sec": 0, 00:06:05.673 "w_mbytes_per_sec": 0 00:06:05.673 }, 00:06:05.673 "claimed": false, 00:06:05.673 "zoned": false, 00:06:05.673 "supported_io_types": { 00:06:05.673 "read": true, 00:06:05.673 "write": true, 00:06:05.673 "unmap": true, 00:06:05.673 "write_zeroes": true, 00:06:05.673 "flush": true, 00:06:05.673 "reset": true, 00:06:05.673 "compare": false, 00:06:05.673 "compare_and_write": false, 00:06:05.673 "abort": true, 00:06:05.673 "nvme_admin": false, 00:06:05.673 "nvme_io": false 00:06:05.673 }, 00:06:05.673 "memory_domains": [ 00:06:05.673 { 00:06:05.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.673 "dma_device_type": 2 00:06:05.673 } 00:06:05.673 ], 00:06:05.673 "driver_specific": { 00:06:05.673 "passthru": { 00:06:05.673 "name": "Passthru0", 00:06:05.673 "base_bdev_name": "Malloc2" 00:06:05.673 } 00:06:05.673 } 00:06:05.673 } 00:06:05.673 ]' 00:06:05.673 14:56:36 -- rpc/rpc.sh@21 -- # jq length 00:06:05.673 14:56:36 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:05.673 14:56:36 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:05.673 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.673 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.673 14:56:36 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:05.673 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.673 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.673 14:56:36 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:05.673 14:56:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.673 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.673 14:56:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.673 14:56:36 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:05.673 14:56:36 -- rpc/rpc.sh@26 -- # jq length 00:06:05.673 14:56:36 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:05.673 00:06:05.673 real 0m0.322s 00:06:05.673 user 0m0.221s 00:06:05.673 sys 0m0.038s 00:06:05.673 14:56:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.674 ************************************ 00:06:05.674 END TEST rpc_daemon_integrity 00:06:05.674 14:56:36 -- common/autotest_common.sh@10 -- # set 
+x 00:06:05.674 ************************************ 00:06:05.674 14:56:36 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:05.674 14:56:36 -- rpc/rpc.sh@84 -- # killprocess 65613 00:06:05.674 14:56:36 -- common/autotest_common.sh@936 -- # '[' -z 65613 ']' 00:06:05.674 14:56:36 -- common/autotest_common.sh@940 -- # kill -0 65613 00:06:05.674 14:56:36 -- common/autotest_common.sh@941 -- # uname 00:06:05.674 14:56:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.674 14:56:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65613 00:06:05.674 14:56:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:05.674 14:56:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:05.674 killing process with pid 65613 00:06:05.674 14:56:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65613' 00:06:05.674 14:56:36 -- common/autotest_common.sh@955 -- # kill 65613 00:06:05.674 14:56:36 -- common/autotest_common.sh@960 -- # wait 65613 00:06:05.933 00:06:05.933 real 0m2.848s 00:06:05.933 user 0m3.858s 00:06:05.933 sys 0m0.591s 00:06:05.933 14:56:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:05.933 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.933 ************************************ 00:06:05.933 END TEST rpc 00:06:05.933 ************************************ 00:06:05.933 14:56:36 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.933 14:56:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.933 14:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.933 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:05.933 ************************************ 00:06:05.933 START TEST rpc_client 00:06:05.933 ************************************ 00:06:05.933 14:56:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:06.192 * Looking for test storage... 00:06:06.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:06.192 14:56:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:06.192 14:56:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:06.192 14:56:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:06.192 14:56:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:06.192 14:56:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:06.192 14:56:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:06.192 14:56:36 -- scripts/common.sh@335 -- # IFS=.-: 00:06:06.192 14:56:36 -- scripts/common.sh@335 -- # read -ra ver1 00:06:06.192 14:56:36 -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.192 14:56:36 -- scripts/common.sh@336 -- # read -ra ver2 00:06:06.192 14:56:36 -- scripts/common.sh@337 -- # local 'op=<' 00:06:06.192 14:56:36 -- scripts/common.sh@339 -- # ver1_l=2 00:06:06.192 14:56:36 -- scripts/common.sh@340 -- # ver2_l=1 00:06:06.192 14:56:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:06.192 14:56:36 -- scripts/common.sh@343 -- # case "$op" in 00:06:06.192 14:56:36 -- scripts/common.sh@344 -- # : 1 00:06:06.192 14:56:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:06.192 14:56:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.192 14:56:36 -- scripts/common.sh@364 -- # decimal 1 00:06:06.192 14:56:36 -- scripts/common.sh@352 -- # local d=1 00:06:06.192 14:56:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.192 14:56:36 -- scripts/common.sh@354 -- # echo 1 00:06:06.192 14:56:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:06.192 14:56:36 -- scripts/common.sh@365 -- # decimal 2 00:06:06.192 14:56:36 -- scripts/common.sh@352 -- # local d=2 00:06:06.192 14:56:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.192 14:56:36 -- scripts/common.sh@354 -- # echo 2 00:06:06.192 14:56:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:06.192 14:56:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:06.192 14:56:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:06.192 14:56:36 -- scripts/common.sh@367 -- # return 0 00:06:06.192 14:56:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.192 --rc genhtml_branch_coverage=1 00:06:06.192 --rc genhtml_function_coverage=1 00:06:06.192 --rc genhtml_legend=1 00:06:06.192 --rc geninfo_all_blocks=1 00:06:06.192 --rc geninfo_unexecuted_blocks=1 00:06:06.192 00:06:06.192 ' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.192 --rc genhtml_branch_coverage=1 00:06:06.192 --rc genhtml_function_coverage=1 00:06:06.192 --rc genhtml_legend=1 00:06:06.192 --rc geninfo_all_blocks=1 00:06:06.192 --rc geninfo_unexecuted_blocks=1 00:06:06.192 00:06:06.192 ' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.192 --rc genhtml_branch_coverage=1 00:06:06.192 --rc genhtml_function_coverage=1 00:06:06.192 --rc genhtml_legend=1 00:06:06.192 --rc geninfo_all_blocks=1 00:06:06.192 --rc geninfo_unexecuted_blocks=1 00:06:06.192 00:06:06.192 ' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:06.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.192 --rc genhtml_branch_coverage=1 00:06:06.192 --rc genhtml_function_coverage=1 00:06:06.192 --rc genhtml_legend=1 00:06:06.192 --rc geninfo_all_blocks=1 00:06:06.192 --rc geninfo_unexecuted_blocks=1 00:06:06.192 00:06:06.192 ' 00:06:06.192 14:56:36 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:06.192 OK 00:06:06.192 14:56:36 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:06.192 00:06:06.192 real 0m0.199s 00:06:06.192 user 0m0.136s 00:06:06.192 sys 0m0.075s 00:06:06.192 14:56:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.192 ************************************ 00:06:06.192 END TEST rpc_client 00:06:06.192 ************************************ 00:06:06.192 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 14:56:36 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.192 14:56:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.192 14:56:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.192 14:56:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.192 ************************************ 00:06:06.192 START TEST 
json_config 00:06:06.192 ************************************ 00:06:06.192 14:56:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:06.451 14:56:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:06.451 14:56:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:06.451 14:56:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:06.451 14:56:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:06.451 14:56:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:06.451 14:56:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:06.451 14:56:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:06.451 14:56:37 -- scripts/common.sh@335 -- # IFS=.-: 00:06:06.451 14:56:37 -- scripts/common.sh@335 -- # read -ra ver1 00:06:06.451 14:56:37 -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.451 14:56:37 -- scripts/common.sh@336 -- # read -ra ver2 00:06:06.451 14:56:37 -- scripts/common.sh@337 -- # local 'op=<' 00:06:06.451 14:56:37 -- scripts/common.sh@339 -- # ver1_l=2 00:06:06.451 14:56:37 -- scripts/common.sh@340 -- # ver2_l=1 00:06:06.451 14:56:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:06.451 14:56:37 -- scripts/common.sh@343 -- # case "$op" in 00:06:06.451 14:56:37 -- scripts/common.sh@344 -- # : 1 00:06:06.451 14:56:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:06.451 14:56:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.451 14:56:37 -- scripts/common.sh@364 -- # decimal 1 00:06:06.451 14:56:37 -- scripts/common.sh@352 -- # local d=1 00:06:06.451 14:56:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.451 14:56:37 -- scripts/common.sh@354 -- # echo 1 00:06:06.451 14:56:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:06.451 14:56:37 -- scripts/common.sh@365 -- # decimal 2 00:06:06.451 14:56:37 -- scripts/common.sh@352 -- # local d=2 00:06:06.451 14:56:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.451 14:56:37 -- scripts/common.sh@354 -- # echo 2 00:06:06.451 14:56:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:06.451 14:56:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:06.451 14:56:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:06.451 14:56:37 -- scripts/common.sh@367 -- # return 0 00:06:06.451 14:56:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.451 14:56:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:06.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.451 --rc genhtml_branch_coverage=1 00:06:06.451 --rc genhtml_function_coverage=1 00:06:06.451 --rc genhtml_legend=1 00:06:06.451 --rc geninfo_all_blocks=1 00:06:06.451 --rc geninfo_unexecuted_blocks=1 00:06:06.451 00:06:06.451 ' 00:06:06.451 14:56:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:06.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.451 --rc genhtml_branch_coverage=1 00:06:06.451 --rc genhtml_function_coverage=1 00:06:06.451 --rc genhtml_legend=1 00:06:06.451 --rc geninfo_all_blocks=1 00:06:06.451 --rc geninfo_unexecuted_blocks=1 00:06:06.451 00:06:06.451 ' 00:06:06.451 14:56:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:06.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.451 --rc genhtml_branch_coverage=1 00:06:06.451 --rc genhtml_function_coverage=1 00:06:06.451 --rc genhtml_legend=1 00:06:06.451 --rc 
geninfo_all_blocks=1 00:06:06.451 --rc geninfo_unexecuted_blocks=1 00:06:06.451 00:06:06.451 ' 00:06:06.451 14:56:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:06.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.451 --rc genhtml_branch_coverage=1 00:06:06.451 --rc genhtml_function_coverage=1 00:06:06.451 --rc genhtml_legend=1 00:06:06.451 --rc geninfo_all_blocks=1 00:06:06.451 --rc geninfo_unexecuted_blocks=1 00:06:06.451 00:06:06.451 ' 00:06:06.451 14:56:37 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:06.451 14:56:37 -- nvmf/common.sh@7 -- # uname -s 00:06:06.451 14:56:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.451 14:56:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.451 14:56:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.451 14:56:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.451 14:56:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.451 14:56:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.451 14:56:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.451 14:56:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.451 14:56:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.452 14:56:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.452 14:56:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:06:06.452 14:56:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:06:06.452 14:56:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.452 14:56:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.452 14:56:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.452 14:56:37 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:06.452 14:56:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.452 14:56:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.452 14:56:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.452 14:56:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.452 14:56:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.452 14:56:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.452 
14:56:37 -- paths/export.sh@5 -- # export PATH 00:06:06.452 14:56:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.452 14:56:37 -- nvmf/common.sh@46 -- # : 0 00:06:06.452 14:56:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:06.452 14:56:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:06.452 14:56:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:06.452 14:56:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.452 14:56:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.452 14:56:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:06.452 14:56:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:06.452 14:56:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:06.452 14:56:37 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:06.452 14:56:37 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:06.452 14:56:37 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:06.452 14:56:37 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:06.452 14:56:37 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:06.452 14:56:37 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:06.452 14:56:37 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:06.452 14:56:37 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:06.452 14:56:37 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:06.452 14:56:37 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:06.452 INFO: JSON configuration test init 00:06:06.452 14:56:37 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.452 14:56:37 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:06.452 14:56:37 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:06.452 14:56:37 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:06.452 14:56:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.452 14:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 14:56:37 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:06.452 14:56:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.452 14:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 Waiting for target to run... 00:06:06.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:06.452 14:56:37 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:06.452 14:56:37 -- json_config/json_config.sh@98 -- # local app=target 00:06:06.452 14:56:37 -- json_config/json_config.sh@99 -- # shift 00:06:06.452 14:56:37 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:06.452 14:56:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:06.452 14:56:37 -- json_config/json_config.sh@111 -- # app_pid[$app]=65866 00:06:06.452 14:56:37 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:06.452 14:56:37 -- json_config/json_config.sh@114 -- # waitforlisten 65866 /var/tmp/spdk_tgt.sock 00:06:06.452 14:56:37 -- common/autotest_common.sh@829 -- # '[' -z 65866 ']' 00:06:06.452 14:56:37 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:06.452 14:56:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.452 14:56:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.452 14:56:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.452 14:56:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.452 14:56:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.452 [2024-11-20 14:56:37.219084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:06.452 [2024-11-20 14:56:37.219427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65866 ] 00:06:07.019 [2024-11-20 14:56:37.528565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.019 [2024-11-20 14:56:37.554330] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.019 [2024-11-20 14:56:37.554779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.588 14:56:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:07.588 14:56:38 -- common/autotest_common.sh@862 -- # return 0 00:06:07.588 14:56:38 -- json_config/json_config.sh@115 -- # echo '' 00:06:07.588 00:06:07.588 14:56:38 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:07.588 14:56:38 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:07.588 14:56:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.588 14:56:38 -- common/autotest_common.sh@10 -- # set +x 00:06:07.588 14:56:38 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:07.588 14:56:38 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:07.588 14:56:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.588 14:56:38 -- common/autotest_common.sh@10 -- # set +x 00:06:07.588 14:56:38 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:07.588 14:56:38 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:07.588 14:56:38 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:08.154 14:56:38 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:08.154 14:56:38 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:08.154 14:56:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.154 14:56:38 -- common/autotest_common.sh@10 -- # set +x 00:06:08.154 14:56:38 -- json_config/json_config.sh@48 -- # local ret=0 00:06:08.154 14:56:38 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:08.154 14:56:38 -- json_config/json_config.sh@49 -- # local enabled_types 00:06:08.154 14:56:38 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:08.154 14:56:38 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:08.154 14:56:38 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:08.412 14:56:39 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:08.412 14:56:39 -- json_config/json_config.sh@51 -- # local get_types 00:06:08.412 14:56:39 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:08.412 14:56:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.412 14:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.412 14:56:39 -- json_config/json_config.sh@58 -- # return 0 00:06:08.412 14:56:39 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:08.412 14:56:39 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:08.412 14:56:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.412 14:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.412 14:56:39 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:08.412 14:56:39 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:06:08.412 14:56:39 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.412 14:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:08.669 MallocForNvmf0 00:06:08.669 14:56:39 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.669 14:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:08.927 MallocForNvmf1 00:06:08.927 14:56:39 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:08.927 14:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:09.187 [2024-11-20 14:56:39.826083] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:09.187 14:56:39 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.187 14:56:39 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:09.446 14:56:40 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.446 14:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:09.704 14:56:40 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.704 14:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:09.962 14:56:40 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:09.962 14:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:10.221 [2024-11-20 14:56:40.890834] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:10.221 14:56:40 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:10.221 14:56:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.221 14:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 14:56:40 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:10.221 14:56:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.221 14:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.221 14:56:40 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:10.221 14:56:40 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.221 14:56:40 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:10.479 MallocBdevForConfigChangeCheck 00:06:10.479 14:56:41 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:10.480 14:56:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.480 14:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.738 14:56:41 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:10.738 14:56:41 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:10.997 INFO: shutting down applications... 00:06:10.997 14:56:41 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
00:06:10.997 14:56:41 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:10.997 14:56:41 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:10.997 14:56:41 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:10.997 14:56:41 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:11.255 Calling clear_iscsi_subsystem 00:06:11.255 Calling clear_nvmf_subsystem 00:06:11.255 Calling clear_nbd_subsystem 00:06:11.255 Calling clear_ublk_subsystem 00:06:11.255 Calling clear_vhost_blk_subsystem 00:06:11.255 Calling clear_vhost_scsi_subsystem 00:06:11.255 Calling clear_scheduler_subsystem 00:06:11.255 Calling clear_bdev_subsystem 00:06:11.255 Calling clear_accel_subsystem 00:06:11.255 Calling clear_vmd_subsystem 00:06:11.255 Calling clear_sock_subsystem 00:06:11.255 Calling clear_iobuf_subsystem 00:06:11.255 14:56:42 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:11.255 14:56:42 -- json_config/json_config.sh@396 -- # count=100 00:06:11.255 14:56:42 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:11.255 14:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:11.255 14:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:11.255 14:56:42 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:11.821 14:56:42 -- json_config/json_config.sh@398 -- # break 00:06:11.821 14:56:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:11.821 14:56:42 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:11.821 14:56:42 -- json_config/json_config.sh@120 -- # local app=target 00:06:11.821 14:56:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:11.821 14:56:42 -- json_config/json_config.sh@124 -- # [[ -n 65866 ]] 00:06:11.821 14:56:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 65866 00:06:11.821 14:56:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:11.821 14:56:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:11.821 14:56:42 -- json_config/json_config.sh@130 -- # kill -0 65866 00:06:11.821 14:56:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:12.390 14:56:42 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:12.390 14:56:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:12.390 14:56:42 -- json_config/json_config.sh@130 -- # kill -0 65866 00:06:12.390 14:56:42 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:12.390 14:56:42 -- json_config/json_config.sh@132 -- # break 00:06:12.390 14:56:42 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:12.390 SPDK target shutdown done 00:06:12.390 14:56:42 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:12.390 INFO: relaunching applications... 00:06:12.390 14:56:42 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 
00:06:12.390 14:56:42 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.390 14:56:42 -- json_config/json_config.sh@98 -- # local app=target 00:06:12.390 14:56:42 -- json_config/json_config.sh@99 -- # shift 00:06:12.390 14:56:42 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:12.390 14:56:42 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:12.390 14:56:42 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:12.390 14:56:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:12.390 14:56:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:12.390 14:56:42 -- json_config/json_config.sh@111 -- # app_pid[$app]=66062 00:06:12.390 Waiting for target to run... 00:06:12.390 14:56:42 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:12.390 14:56:42 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:12.390 14:56:42 -- json_config/json_config.sh@114 -- # waitforlisten 66062 /var/tmp/spdk_tgt.sock 00:06:12.390 14:56:42 -- common/autotest_common.sh@829 -- # '[' -z 66062 ']' 00:06:12.390 14:56:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:12.390 14:56:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:12.390 14:56:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:12.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:12.390 14:56:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:12.390 14:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.390 [2024-11-20 14:56:42.994339] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:12.390 [2024-11-20 14:56:42.994434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66062 ] 00:06:12.649 [2024-11-20 14:56:43.275605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.649 [2024-11-20 14:56:43.297200] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.649 [2024-11-20 14:56:43.297357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.907 [2024-11-20 14:56:43.594952] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:12.908 [2024-11-20 14:56:43.627026] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:13.475 00:06:13.475 INFO: Checking if target configuration is the same... 00:06:13.475 14:56:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:13.475 14:56:44 -- common/autotest_common.sh@862 -- # return 0 00:06:13.475 14:56:44 -- json_config/json_config.sh@115 -- # echo '' 00:06:13.475 14:56:44 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:13.475 14:56:44 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
00:06:13.475 14:56:44 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.475 14:56:44 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:13.475 14:56:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:13.475 + '[' 2 -ne 2 ']' 00:06:13.475 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:13.475 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:13.475 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:13.475 +++ basename /dev/fd/62 00:06:13.475 ++ mktemp /tmp/62.XXX 00:06:13.475 + tmp_file_1=/tmp/62.faN 00:06:13.475 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:13.475 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:13.475 + tmp_file_2=/tmp/spdk_tgt_config.json.ddM 00:06:13.475 + ret=0 00:06:13.475 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.733 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:13.992 + diff -u /tmp/62.faN /tmp/spdk_tgt_config.json.ddM 00:06:13.992 INFO: JSON config files are the same 00:06:13.992 + echo 'INFO: JSON config files are the same' 00:06:13.992 + rm /tmp/62.faN /tmp/spdk_tgt_config.json.ddM 00:06:13.992 + exit 0 00:06:13.992 INFO: changing configuration and checking if this can be detected... 00:06:13.992 14:56:44 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:13.992 14:56:44 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:13.992 14:56:44 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:13.992 14:56:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:14.251 14:56:44 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:14.251 14:56:44 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:14.251 14:56:44 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:14.251 + '[' 2 -ne 2 ']' 00:06:14.251 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:14.251 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:14.251 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:14.251 +++ basename /dev/fd/62 00:06:14.251 ++ mktemp /tmp/62.XXX 00:06:14.251 + tmp_file_1=/tmp/62.XmC 00:06:14.251 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:14.251 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:14.251 + tmp_file_2=/tmp/spdk_tgt_config.json.OZb 00:06:14.251 + ret=0 00:06:14.251 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:14.509 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:14.510 + diff -u /tmp/62.XmC /tmp/spdk_tgt_config.json.OZb 00:06:14.510 + ret=1 00:06:14.510 + echo '=== Start of file: /tmp/62.XmC ===' 00:06:14.510 + cat /tmp/62.XmC 00:06:14.510 + echo '=== End of file: /tmp/62.XmC ===' 00:06:14.510 + echo '' 00:06:14.510 + echo '=== Start of file: /tmp/spdk_tgt_config.json.OZb ===' 00:06:14.510 + cat /tmp/spdk_tgt_config.json.OZb 00:06:14.510 + echo '=== End of file: /tmp/spdk_tgt_config.json.OZb ===' 00:06:14.510 + echo '' 00:06:14.510 + rm /tmp/62.XmC /tmp/spdk_tgt_config.json.OZb 00:06:14.510 + exit 1 00:06:14.510 INFO: configuration change detected. 00:06:14.510 14:56:45 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:14.510 14:56:45 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:14.510 14:56:45 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:14.510 14:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.510 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 14:56:45 -- json_config/json_config.sh@360 -- # local ret=0 00:06:14.510 14:56:45 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:14.510 14:56:45 -- json_config/json_config.sh@370 -- # [[ -n 66062 ]] 00:06:14.510 14:56:45 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:14.510 14:56:45 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:14.510 14:56:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:14.510 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 14:56:45 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:14.510 14:56:45 -- json_config/json_config.sh@246 -- # uname -s 00:06:14.510 14:56:45 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:14.510 14:56:45 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:14.769 14:56:45 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:14.769 14:56:45 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:14.769 14:56:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.769 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.769 14:56:45 -- json_config/json_config.sh@376 -- # killprocess 66062 00:06:14.769 14:56:45 -- common/autotest_common.sh@936 -- # '[' -z 66062 ']' 00:06:14.769 14:56:45 -- common/autotest_common.sh@940 -- # kill -0 66062 00:06:14.769 14:56:45 -- common/autotest_common.sh@941 -- # uname 00:06:14.769 14:56:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.769 14:56:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66062 00:06:14.769 killing process with pid 66062 00:06:14.769 14:56:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.769 14:56:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.769 14:56:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66062' 00:06:14.769 
14:56:45 -- common/autotest_common.sh@955 -- # kill 66062 00:06:14.769 14:56:45 -- common/autotest_common.sh@960 -- # wait 66062 00:06:14.769 14:56:45 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:14.769 14:56:45 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:14.769 14:56:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:14.769 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 INFO: Success 00:06:15.029 14:56:45 -- json_config/json_config.sh@381 -- # return 0 00:06:15.029 14:56:45 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:15.029 00:06:15.029 real 0m8.614s 00:06:15.029 user 0m12.703s 00:06:15.029 sys 0m1.455s 00:06:15.029 14:56:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:15.029 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 ************************************ 00:06:15.029 END TEST json_config 00:06:15.029 ************************************ 00:06:15.029 14:56:45 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:15.029 14:56:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.029 14:56:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.029 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.029 ************************************ 00:06:15.029 START TEST json_config_extra_key 00:06:15.029 ************************************ 00:06:15.029 14:56:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:15.029 14:56:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:15.029 14:56:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:15.029 14:56:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:15.029 14:56:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:15.029 14:56:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:15.029 14:56:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:15.029 14:56:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:15.029 14:56:45 -- scripts/common.sh@335 -- # IFS=.-: 00:06:15.029 14:56:45 -- scripts/common.sh@335 -- # read -ra ver1 00:06:15.029 14:56:45 -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.029 14:56:45 -- scripts/common.sh@336 -- # read -ra ver2 00:06:15.029 14:56:45 -- scripts/common.sh@337 -- # local 'op=<' 00:06:15.029 14:56:45 -- scripts/common.sh@339 -- # ver1_l=2 00:06:15.029 14:56:45 -- scripts/common.sh@340 -- # ver2_l=1 00:06:15.029 14:56:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:15.029 14:56:45 -- scripts/common.sh@343 -- # case "$op" in 00:06:15.029 14:56:45 -- scripts/common.sh@344 -- # : 1 00:06:15.029 14:56:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:15.029 14:56:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.029 14:56:45 -- scripts/common.sh@364 -- # decimal 1 00:06:15.029 14:56:45 -- scripts/common.sh@352 -- # local d=1 00:06:15.029 14:56:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.029 14:56:45 -- scripts/common.sh@354 -- # echo 1 00:06:15.029 14:56:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:15.029 14:56:45 -- scripts/common.sh@365 -- # decimal 2 00:06:15.029 14:56:45 -- scripts/common.sh@352 -- # local d=2 00:06:15.029 14:56:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.029 14:56:45 -- scripts/common.sh@354 -- # echo 2 00:06:15.029 14:56:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:15.029 14:56:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:15.029 14:56:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:15.029 14:56:45 -- scripts/common.sh@367 -- # return 0 00:06:15.029 14:56:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.029 14:56:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:15.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.029 --rc genhtml_branch_coverage=1 00:06:15.029 --rc genhtml_function_coverage=1 00:06:15.029 --rc genhtml_legend=1 00:06:15.029 --rc geninfo_all_blocks=1 00:06:15.029 --rc geninfo_unexecuted_blocks=1 00:06:15.029 00:06:15.029 ' 00:06:15.029 14:56:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.030 --rc genhtml_branch_coverage=1 00:06:15.030 --rc genhtml_function_coverage=1 00:06:15.030 --rc genhtml_legend=1 00:06:15.030 --rc geninfo_all_blocks=1 00:06:15.030 --rc geninfo_unexecuted_blocks=1 00:06:15.030 00:06:15.030 ' 00:06:15.030 14:56:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.030 --rc genhtml_branch_coverage=1 00:06:15.030 --rc genhtml_function_coverage=1 00:06:15.030 --rc genhtml_legend=1 00:06:15.030 --rc geninfo_all_blocks=1 00:06:15.030 --rc geninfo_unexecuted_blocks=1 00:06:15.030 00:06:15.030 ' 00:06:15.030 14:56:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:15.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.030 --rc genhtml_branch_coverage=1 00:06:15.030 --rc genhtml_function_coverage=1 00:06:15.030 --rc genhtml_legend=1 00:06:15.030 --rc geninfo_all_blocks=1 00:06:15.030 --rc geninfo_unexecuted_blocks=1 00:06:15.030 00:06:15.030 ' 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:15.030 14:56:45 -- nvmf/common.sh@7 -- # uname -s 00:06:15.030 14:56:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.030 14:56:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.030 14:56:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.030 14:56:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.030 14:56:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.030 14:56:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.030 14:56:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.030 14:56:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.030 14:56:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.030 14:56:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.030 14:56:45 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:06:15.030 14:56:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:06:15.030 14:56:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.030 14:56:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.030 14:56:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:15.030 14:56:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:15.030 14:56:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.030 14:56:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.030 14:56:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.030 14:56:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.030 14:56:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.030 14:56:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.030 14:56:45 -- paths/export.sh@5 -- # export PATH 00:06:15.030 14:56:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.030 14:56:45 -- nvmf/common.sh@46 -- # : 0 00:06:15.030 14:56:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:15.030 14:56:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:15.030 14:56:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:15.030 14:56:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.030 14:56:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.030 14:56:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:15.030 14:56:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:15.030 14:56:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@17 -- # 
app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:15.030 INFO: launching applications... 00:06:15.030 Waiting for target to run... 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66214 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:15.030 14:56:45 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66214 /var/tmp/spdk_tgt.sock 00:06:15.030 14:56:45 -- common/autotest_common.sh@829 -- # '[' -z 66214 ']' 00:06:15.030 14:56:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:15.030 14:56:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.030 14:56:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:15.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:15.030 14:56:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.030 14:56:45 -- common/autotest_common.sh@10 -- # set +x 00:06:15.289 [2024-11-20 14:56:45.875260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
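At this point json_config_test_start_app has launched the target with the extra_key.json configuration and waitforlisten is polling /var/tmp/spdk_tgt.sock until the app answers (the helper allows up to 100 retries). A rough equivalent of that launch-and-wait step, with the polling loop simplified to an rpc_get_methods probe instead of the harness's own waitforlisten:

# Start the target with the extra-key JSON config, as in the trace above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
  -r /var/tmp/spdk_tgt.sock \
  --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!
echo 'Waiting for target to run...'
# Poll the RPC socket until the target services requests.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "spdk_tgt is up (pid ${app_pid})"

Shutdown, as the trace below shows, is then a kill -SIGINT to that pid followed by a bounded kill -0 wait loop before the harness reports "SPDK target shutdown done".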
00:06:15.289 [2024-11-20 14:56:45.875523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66214 ] 00:06:15.548 [2024-11-20 14:56:46.157988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.548 [2024-11-20 14:56:46.179737] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.548 [2024-11-20 14:56:46.180105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.484 14:56:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.484 14:56:46 -- common/autotest_common.sh@862 -- # return 0 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:16.484 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:16.484 INFO: shutting down applications... 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66214 ]] 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66214 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66214 00:06:16.484 14:56:46 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66214 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:16.743 SPDK target shutdown done 00:06:16.743 Success 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:16.743 14:56:47 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:16.743 ************************************ 00:06:16.743 END TEST json_config_extra_key 00:06:16.743 ************************************ 00:06:16.743 00:06:16.743 real 0m1.860s 00:06:16.743 user 0m1.814s 00:06:16.743 sys 0m0.310s 00:06:16.743 14:56:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.743 14:56:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.743 14:56:47 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:16.743 14:56:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.743 14:56:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.743 14:56:47 -- common/autotest_common.sh@10 -- # set +x 00:06:16.743 ************************************ 00:06:16.743 START TEST alias_rpc 00:06:16.743 ************************************ 00:06:16.743 14:56:47 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.002 * Looking for test storage... 00:06:17.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:17.002 14:56:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:17.002 14:56:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:17.002 14:56:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:17.002 14:56:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:17.002 14:56:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:17.002 14:56:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:17.002 14:56:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:17.002 14:56:47 -- scripts/common.sh@335 -- # IFS=.-: 00:06:17.002 14:56:47 -- scripts/common.sh@335 -- # read -ra ver1 00:06:17.002 14:56:47 -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.002 14:56:47 -- scripts/common.sh@336 -- # read -ra ver2 00:06:17.002 14:56:47 -- scripts/common.sh@337 -- # local 'op=<' 00:06:17.002 14:56:47 -- scripts/common.sh@339 -- # ver1_l=2 00:06:17.002 14:56:47 -- scripts/common.sh@340 -- # ver2_l=1 00:06:17.002 14:56:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:17.002 14:56:47 -- scripts/common.sh@343 -- # case "$op" in 00:06:17.002 14:56:47 -- scripts/common.sh@344 -- # : 1 00:06:17.002 14:56:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:17.002 14:56:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.002 14:56:47 -- scripts/common.sh@364 -- # decimal 1 00:06:17.002 14:56:47 -- scripts/common.sh@352 -- # local d=1 00:06:17.002 14:56:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.002 14:56:47 -- scripts/common.sh@354 -- # echo 1 00:06:17.002 14:56:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:17.002 14:56:47 -- scripts/common.sh@365 -- # decimal 2 00:06:17.002 14:56:47 -- scripts/common.sh@352 -- # local d=2 00:06:17.002 14:56:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.002 14:56:47 -- scripts/common.sh@354 -- # echo 2 00:06:17.002 14:56:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:17.002 14:56:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:17.002 14:56:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:17.002 14:56:47 -- scripts/common.sh@367 -- # return 0 00:06:17.002 14:56:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.002 14:56:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.002 --rc genhtml_branch_coverage=1 00:06:17.002 --rc genhtml_function_coverage=1 00:06:17.002 --rc genhtml_legend=1 00:06:17.002 --rc geninfo_all_blocks=1 00:06:17.002 --rc geninfo_unexecuted_blocks=1 00:06:17.002 00:06:17.002 ' 00:06:17.002 14:56:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.002 --rc genhtml_branch_coverage=1 00:06:17.002 --rc genhtml_function_coverage=1 00:06:17.002 --rc genhtml_legend=1 00:06:17.002 --rc geninfo_all_blocks=1 00:06:17.002 --rc geninfo_unexecuted_blocks=1 00:06:17.002 00:06:17.002 ' 00:06:17.002 14:56:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.002 --rc genhtml_branch_coverage=1 00:06:17.002 --rc genhtml_function_coverage=1 00:06:17.002 --rc genhtml_legend=1 
00:06:17.002 --rc geninfo_all_blocks=1 00:06:17.002 --rc geninfo_unexecuted_blocks=1 00:06:17.002 00:06:17.002 ' 00:06:17.002 14:56:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:17.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.002 --rc genhtml_branch_coverage=1 00:06:17.002 --rc genhtml_function_coverage=1 00:06:17.002 --rc genhtml_legend=1 00:06:17.002 --rc geninfo_all_blocks=1 00:06:17.002 --rc geninfo_unexecuted_blocks=1 00:06:17.002 00:06:17.002 ' 00:06:17.002 14:56:47 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:17.002 14:56:47 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66281 00:06:17.002 14:56:47 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66281 00:06:17.003 14:56:47 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.003 14:56:47 -- common/autotest_common.sh@829 -- # '[' -z 66281 ']' 00:06:17.003 14:56:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.003 14:56:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.003 14:56:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.003 14:56:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.003 14:56:47 -- common/autotest_common.sh@10 -- # set +x 00:06:17.003 [2024-11-20 14:56:47.781505] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:17.003 [2024-11-20 14:56:47.782074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66281 ] 00:06:17.263 [2024-11-20 14:56:47.922107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.263 [2024-11-20 14:56:47.981956] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.263 [2024-11-20 14:56:47.982581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.257 14:56:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.257 14:56:48 -- common/autotest_common.sh@862 -- # return 0 00:06:18.257 14:56:48 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:18.516 14:56:49 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66281 00:06:18.516 14:56:49 -- common/autotest_common.sh@936 -- # '[' -z 66281 ']' 00:06:18.516 14:56:49 -- common/autotest_common.sh@940 -- # kill -0 66281 00:06:18.516 14:56:49 -- common/autotest_common.sh@941 -- # uname 00:06:18.516 14:56:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.516 14:56:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66281 00:06:18.516 14:56:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.516 killing process with pid 66281 00:06:18.516 14:56:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.516 14:56:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66281' 00:06:18.516 14:56:49 -- common/autotest_common.sh@955 -- # kill 66281 00:06:18.516 14:56:49 -- common/autotest_common.sh@960 -- # wait 66281 00:06:18.782 ************************************ 00:06:18.782 END TEST alias_rpc 00:06:18.782 
************************************ 00:06:18.782 00:06:18.782 real 0m1.859s 00:06:18.782 user 0m2.196s 00:06:18.782 sys 0m0.441s 00:06:18.782 14:56:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:18.782 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.782 14:56:49 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:18.782 14:56:49 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:18.782 14:56:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:18.782 14:56:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.782 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.782 ************************************ 00:06:18.782 START TEST spdkcli_tcp 00:06:18.782 ************************************ 00:06:18.782 14:56:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:18.782 * Looking for test storage... 00:06:18.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:18.782 14:56:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:18.782 14:56:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:18.782 14:56:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:19.042 14:56:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:19.042 14:56:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:19.042 14:56:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:19.042 14:56:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:19.042 14:56:49 -- scripts/common.sh@335 -- # IFS=.-: 00:06:19.042 14:56:49 -- scripts/common.sh@335 -- # read -ra ver1 00:06:19.042 14:56:49 -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.042 14:56:49 -- scripts/common.sh@336 -- # read -ra ver2 00:06:19.042 14:56:49 -- scripts/common.sh@337 -- # local 'op=<' 00:06:19.042 14:56:49 -- scripts/common.sh@339 -- # ver1_l=2 00:06:19.042 14:56:49 -- scripts/common.sh@340 -- # ver2_l=1 00:06:19.042 14:56:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:19.042 14:56:49 -- scripts/common.sh@343 -- # case "$op" in 00:06:19.042 14:56:49 -- scripts/common.sh@344 -- # : 1 00:06:19.042 14:56:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:19.042 14:56:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.042 14:56:49 -- scripts/common.sh@364 -- # decimal 1 00:06:19.042 14:56:49 -- scripts/common.sh@352 -- # local d=1 00:06:19.042 14:56:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.042 14:56:49 -- scripts/common.sh@354 -- # echo 1 00:06:19.042 14:56:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:19.042 14:56:49 -- scripts/common.sh@365 -- # decimal 2 00:06:19.042 14:56:49 -- scripts/common.sh@352 -- # local d=2 00:06:19.042 14:56:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.042 14:56:49 -- scripts/common.sh@354 -- # echo 2 00:06:19.042 14:56:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:19.042 14:56:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:19.042 14:56:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:19.042 14:56:49 -- scripts/common.sh@367 -- # return 0 00:06:19.042 14:56:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.042 14:56:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:19.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.042 --rc genhtml_branch_coverage=1 00:06:19.042 --rc genhtml_function_coverage=1 00:06:19.042 --rc genhtml_legend=1 00:06:19.042 --rc geninfo_all_blocks=1 00:06:19.042 --rc geninfo_unexecuted_blocks=1 00:06:19.042 00:06:19.042 ' 00:06:19.042 14:56:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:19.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.042 --rc genhtml_branch_coverage=1 00:06:19.042 --rc genhtml_function_coverage=1 00:06:19.042 --rc genhtml_legend=1 00:06:19.042 --rc geninfo_all_blocks=1 00:06:19.042 --rc geninfo_unexecuted_blocks=1 00:06:19.042 00:06:19.042 ' 00:06:19.042 14:56:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:19.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.042 --rc genhtml_branch_coverage=1 00:06:19.042 --rc genhtml_function_coverage=1 00:06:19.043 --rc genhtml_legend=1 00:06:19.043 --rc geninfo_all_blocks=1 00:06:19.043 --rc geninfo_unexecuted_blocks=1 00:06:19.043 00:06:19.043 ' 00:06:19.043 14:56:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:19.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.043 --rc genhtml_branch_coverage=1 00:06:19.043 --rc genhtml_function_coverage=1 00:06:19.043 --rc genhtml_legend=1 00:06:19.043 --rc geninfo_all_blocks=1 00:06:19.043 --rc geninfo_unexecuted_blocks=1 00:06:19.043 00:06:19.043 ' 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:19.043 14:56:49 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:19.043 14:56:49 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:19.043 14:56:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:19.043 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66364 00:06:19.043 14:56:49 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:19.043 14:56:49 -- 
spdkcli/tcp.sh@27 -- # waitforlisten 66364 00:06:19.043 14:56:49 -- common/autotest_common.sh@829 -- # '[' -z 66364 ']' 00:06:19.043 14:56:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.043 14:56:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.043 14:56:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.043 14:56:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.043 14:56:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.043 [2024-11-20 14:56:49.718460] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:19.043 [2024-11-20 14:56:49.718569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66364 ] 00:06:19.301 [2024-11-20 14:56:49.855278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.301 [2024-11-20 14:56:49.903190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.301 [2024-11-20 14:56:49.903506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.301 [2024-11-20 14:56:49.903602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.237 14:56:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.237 14:56:50 -- common/autotest_common.sh@862 -- # return 0 00:06:20.237 14:56:50 -- spdkcli/tcp.sh@31 -- # socat_pid=66381 00:06:20.237 14:56:50 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:20.237 14:56:50 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:20.496 [ 00:06:20.496 "bdev_malloc_delete", 00:06:20.496 "bdev_malloc_create", 00:06:20.496 "bdev_null_resize", 00:06:20.496 "bdev_null_delete", 00:06:20.496 "bdev_null_create", 00:06:20.496 "bdev_nvme_cuse_unregister", 00:06:20.496 "bdev_nvme_cuse_register", 00:06:20.496 "bdev_opal_new_user", 00:06:20.496 "bdev_opal_set_lock_state", 00:06:20.496 "bdev_opal_delete", 00:06:20.496 "bdev_opal_get_info", 00:06:20.496 "bdev_opal_create", 00:06:20.496 "bdev_nvme_opal_revert", 00:06:20.496 "bdev_nvme_opal_init", 00:06:20.496 "bdev_nvme_send_cmd", 00:06:20.496 "bdev_nvme_get_path_iostat", 00:06:20.496 "bdev_nvme_get_mdns_discovery_info", 00:06:20.496 "bdev_nvme_stop_mdns_discovery", 00:06:20.496 "bdev_nvme_start_mdns_discovery", 00:06:20.496 "bdev_nvme_set_multipath_policy", 00:06:20.496 "bdev_nvme_set_preferred_path", 00:06:20.496 "bdev_nvme_get_io_paths", 00:06:20.496 "bdev_nvme_remove_error_injection", 00:06:20.496 "bdev_nvme_add_error_injection", 00:06:20.496 "bdev_nvme_get_discovery_info", 00:06:20.496 "bdev_nvme_stop_discovery", 00:06:20.496 "bdev_nvme_start_discovery", 00:06:20.496 "bdev_nvme_get_controller_health_info", 00:06:20.496 "bdev_nvme_disable_controller", 00:06:20.496 "bdev_nvme_enable_controller", 00:06:20.496 "bdev_nvme_reset_controller", 00:06:20.496 "bdev_nvme_get_transport_statistics", 00:06:20.496 "bdev_nvme_apply_firmware", 00:06:20.496 "bdev_nvme_detach_controller", 00:06:20.496 "bdev_nvme_get_controllers", 00:06:20.496 "bdev_nvme_attach_controller", 00:06:20.496 "bdev_nvme_set_hotplug", 00:06:20.496 
"bdev_nvme_set_options", 00:06:20.496 "bdev_passthru_delete", 00:06:20.496 "bdev_passthru_create", 00:06:20.496 "bdev_lvol_grow_lvstore", 00:06:20.496 "bdev_lvol_get_lvols", 00:06:20.496 "bdev_lvol_get_lvstores", 00:06:20.496 "bdev_lvol_delete", 00:06:20.496 "bdev_lvol_set_read_only", 00:06:20.496 "bdev_lvol_resize", 00:06:20.496 "bdev_lvol_decouple_parent", 00:06:20.496 "bdev_lvol_inflate", 00:06:20.496 "bdev_lvol_rename", 00:06:20.496 "bdev_lvol_clone_bdev", 00:06:20.496 "bdev_lvol_clone", 00:06:20.496 "bdev_lvol_snapshot", 00:06:20.496 "bdev_lvol_create", 00:06:20.496 "bdev_lvol_delete_lvstore", 00:06:20.496 "bdev_lvol_rename_lvstore", 00:06:20.496 "bdev_lvol_create_lvstore", 00:06:20.496 "bdev_raid_set_options", 00:06:20.497 "bdev_raid_remove_base_bdev", 00:06:20.497 "bdev_raid_add_base_bdev", 00:06:20.497 "bdev_raid_delete", 00:06:20.497 "bdev_raid_create", 00:06:20.497 "bdev_raid_get_bdevs", 00:06:20.497 "bdev_error_inject_error", 00:06:20.497 "bdev_error_delete", 00:06:20.497 "bdev_error_create", 00:06:20.497 "bdev_split_delete", 00:06:20.497 "bdev_split_create", 00:06:20.497 "bdev_delay_delete", 00:06:20.497 "bdev_delay_create", 00:06:20.497 "bdev_delay_update_latency", 00:06:20.497 "bdev_zone_block_delete", 00:06:20.497 "bdev_zone_block_create", 00:06:20.497 "blobfs_create", 00:06:20.497 "blobfs_detect", 00:06:20.497 "blobfs_set_cache_size", 00:06:20.497 "bdev_aio_delete", 00:06:20.497 "bdev_aio_rescan", 00:06:20.497 "bdev_aio_create", 00:06:20.497 "bdev_ftl_set_property", 00:06:20.497 "bdev_ftl_get_properties", 00:06:20.497 "bdev_ftl_get_stats", 00:06:20.497 "bdev_ftl_unmap", 00:06:20.497 "bdev_ftl_unload", 00:06:20.497 "bdev_ftl_delete", 00:06:20.497 "bdev_ftl_load", 00:06:20.497 "bdev_ftl_create", 00:06:20.497 "bdev_virtio_attach_controller", 00:06:20.497 "bdev_virtio_scsi_get_devices", 00:06:20.497 "bdev_virtio_detach_controller", 00:06:20.497 "bdev_virtio_blk_set_hotplug", 00:06:20.497 "bdev_iscsi_delete", 00:06:20.497 "bdev_iscsi_create", 00:06:20.497 "bdev_iscsi_set_options", 00:06:20.497 "bdev_uring_delete", 00:06:20.497 "bdev_uring_create", 00:06:20.497 "accel_error_inject_error", 00:06:20.497 "ioat_scan_accel_module", 00:06:20.497 "dsa_scan_accel_module", 00:06:20.497 "iaa_scan_accel_module", 00:06:20.497 "iscsi_set_options", 00:06:20.497 "iscsi_get_auth_groups", 00:06:20.497 "iscsi_auth_group_remove_secret", 00:06:20.497 "iscsi_auth_group_add_secret", 00:06:20.497 "iscsi_delete_auth_group", 00:06:20.497 "iscsi_create_auth_group", 00:06:20.497 "iscsi_set_discovery_auth", 00:06:20.497 "iscsi_get_options", 00:06:20.497 "iscsi_target_node_request_logout", 00:06:20.497 "iscsi_target_node_set_redirect", 00:06:20.497 "iscsi_target_node_set_auth", 00:06:20.497 "iscsi_target_node_add_lun", 00:06:20.497 "iscsi_get_connections", 00:06:20.497 "iscsi_portal_group_set_auth", 00:06:20.497 "iscsi_start_portal_group", 00:06:20.497 "iscsi_delete_portal_group", 00:06:20.497 "iscsi_create_portal_group", 00:06:20.497 "iscsi_get_portal_groups", 00:06:20.497 "iscsi_delete_target_node", 00:06:20.497 "iscsi_target_node_remove_pg_ig_maps", 00:06:20.497 "iscsi_target_node_add_pg_ig_maps", 00:06:20.497 "iscsi_create_target_node", 00:06:20.497 "iscsi_get_target_nodes", 00:06:20.497 "iscsi_delete_initiator_group", 00:06:20.497 "iscsi_initiator_group_remove_initiators", 00:06:20.497 "iscsi_initiator_group_add_initiators", 00:06:20.497 "iscsi_create_initiator_group", 00:06:20.497 "iscsi_get_initiator_groups", 00:06:20.497 "nvmf_set_crdt", 00:06:20.497 "nvmf_set_config", 00:06:20.497 
"nvmf_set_max_subsystems", 00:06:20.497 "nvmf_subsystem_get_listeners", 00:06:20.497 "nvmf_subsystem_get_qpairs", 00:06:20.497 "nvmf_subsystem_get_controllers", 00:06:20.497 "nvmf_get_stats", 00:06:20.497 "nvmf_get_transports", 00:06:20.497 "nvmf_create_transport", 00:06:20.497 "nvmf_get_targets", 00:06:20.497 "nvmf_delete_target", 00:06:20.497 "nvmf_create_target", 00:06:20.497 "nvmf_subsystem_allow_any_host", 00:06:20.497 "nvmf_subsystem_remove_host", 00:06:20.497 "nvmf_subsystem_add_host", 00:06:20.497 "nvmf_subsystem_remove_ns", 00:06:20.497 "nvmf_subsystem_add_ns", 00:06:20.497 "nvmf_subsystem_listener_set_ana_state", 00:06:20.497 "nvmf_discovery_get_referrals", 00:06:20.497 "nvmf_discovery_remove_referral", 00:06:20.497 "nvmf_discovery_add_referral", 00:06:20.497 "nvmf_subsystem_remove_listener", 00:06:20.497 "nvmf_subsystem_add_listener", 00:06:20.497 "nvmf_delete_subsystem", 00:06:20.497 "nvmf_create_subsystem", 00:06:20.497 "nvmf_get_subsystems", 00:06:20.497 "env_dpdk_get_mem_stats", 00:06:20.497 "nbd_get_disks", 00:06:20.497 "nbd_stop_disk", 00:06:20.497 "nbd_start_disk", 00:06:20.497 "ublk_recover_disk", 00:06:20.497 "ublk_get_disks", 00:06:20.497 "ublk_stop_disk", 00:06:20.497 "ublk_start_disk", 00:06:20.497 "ublk_destroy_target", 00:06:20.497 "ublk_create_target", 00:06:20.497 "virtio_blk_create_transport", 00:06:20.497 "virtio_blk_get_transports", 00:06:20.497 "vhost_controller_set_coalescing", 00:06:20.497 "vhost_get_controllers", 00:06:20.497 "vhost_delete_controller", 00:06:20.497 "vhost_create_blk_controller", 00:06:20.497 "vhost_scsi_controller_remove_target", 00:06:20.497 "vhost_scsi_controller_add_target", 00:06:20.497 "vhost_start_scsi_controller", 00:06:20.497 "vhost_create_scsi_controller", 00:06:20.497 "thread_set_cpumask", 00:06:20.497 "framework_get_scheduler", 00:06:20.497 "framework_set_scheduler", 00:06:20.497 "framework_get_reactors", 00:06:20.497 "thread_get_io_channels", 00:06:20.497 "thread_get_pollers", 00:06:20.497 "thread_get_stats", 00:06:20.497 "framework_monitor_context_switch", 00:06:20.497 "spdk_kill_instance", 00:06:20.497 "log_enable_timestamps", 00:06:20.497 "log_get_flags", 00:06:20.497 "log_clear_flag", 00:06:20.497 "log_set_flag", 00:06:20.497 "log_get_level", 00:06:20.497 "log_set_level", 00:06:20.497 "log_get_print_level", 00:06:20.497 "log_set_print_level", 00:06:20.497 "framework_enable_cpumask_locks", 00:06:20.497 "framework_disable_cpumask_locks", 00:06:20.497 "framework_wait_init", 00:06:20.497 "framework_start_init", 00:06:20.497 "scsi_get_devices", 00:06:20.497 "bdev_get_histogram", 00:06:20.497 "bdev_enable_histogram", 00:06:20.497 "bdev_set_qos_limit", 00:06:20.497 "bdev_set_qd_sampling_period", 00:06:20.497 "bdev_get_bdevs", 00:06:20.497 "bdev_reset_iostat", 00:06:20.497 "bdev_get_iostat", 00:06:20.497 "bdev_examine", 00:06:20.497 "bdev_wait_for_examine", 00:06:20.497 "bdev_set_options", 00:06:20.497 "notify_get_notifications", 00:06:20.497 "notify_get_types", 00:06:20.497 "accel_get_stats", 00:06:20.497 "accel_set_options", 00:06:20.497 "accel_set_driver", 00:06:20.497 "accel_crypto_key_destroy", 00:06:20.497 "accel_crypto_keys_get", 00:06:20.497 "accel_crypto_key_create", 00:06:20.497 "accel_assign_opc", 00:06:20.497 "accel_get_module_info", 00:06:20.497 "accel_get_opc_assignments", 00:06:20.497 "vmd_rescan", 00:06:20.497 "vmd_remove_device", 00:06:20.497 "vmd_enable", 00:06:20.497 "sock_set_default_impl", 00:06:20.497 "sock_impl_set_options", 00:06:20.497 "sock_impl_get_options", 00:06:20.497 "iobuf_get_stats", 00:06:20.497 
"iobuf_set_options", 00:06:20.497 "framework_get_pci_devices", 00:06:20.497 "framework_get_config", 00:06:20.497 "framework_get_subsystems", 00:06:20.497 "trace_get_info", 00:06:20.497 "trace_get_tpoint_group_mask", 00:06:20.497 "trace_disable_tpoint_group", 00:06:20.497 "trace_enable_tpoint_group", 00:06:20.497 "trace_clear_tpoint_mask", 00:06:20.497 "trace_set_tpoint_mask", 00:06:20.497 "spdk_get_version", 00:06:20.497 "rpc_get_methods" 00:06:20.497 ] 00:06:20.497 14:56:51 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:20.497 14:56:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:20.497 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:20.497 14:56:51 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:20.497 14:56:51 -- spdkcli/tcp.sh@38 -- # killprocess 66364 00:06:20.497 14:56:51 -- common/autotest_common.sh@936 -- # '[' -z 66364 ']' 00:06:20.497 14:56:51 -- common/autotest_common.sh@940 -- # kill -0 66364 00:06:20.497 14:56:51 -- common/autotest_common.sh@941 -- # uname 00:06:20.497 14:56:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:20.497 14:56:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66364 00:06:20.497 killing process with pid 66364 00:06:20.497 14:56:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:20.497 14:56:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:20.497 14:56:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66364' 00:06:20.497 14:56:51 -- common/autotest_common.sh@955 -- # kill 66364 00:06:20.497 14:56:51 -- common/autotest_common.sh@960 -- # wait 66364 00:06:20.757 00:06:20.757 real 0m1.930s 00:06:20.757 user 0m3.757s 00:06:20.757 sys 0m0.420s 00:06:20.757 14:56:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:20.757 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:20.757 ************************************ 00:06:20.757 END TEST spdkcli_tcp 00:06:20.757 ************************************ 00:06:20.757 14:56:51 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.757 14:56:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:20.757 14:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.757 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:20.757 ************************************ 00:06:20.757 START TEST dpdk_mem_utility 00:06:20.757 ************************************ 00:06:20.757 14:56:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:20.757 * Looking for test storage... 
00:06:20.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:20.757 14:56:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:20.757 14:56:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:20.757 14:56:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:21.016 14:56:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:21.016 14:56:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:21.016 14:56:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:21.016 14:56:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:21.016 14:56:51 -- scripts/common.sh@335 -- # IFS=.-: 00:06:21.016 14:56:51 -- scripts/common.sh@335 -- # read -ra ver1 00:06:21.016 14:56:51 -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.016 14:56:51 -- scripts/common.sh@336 -- # read -ra ver2 00:06:21.016 14:56:51 -- scripts/common.sh@337 -- # local 'op=<' 00:06:21.016 14:56:51 -- scripts/common.sh@339 -- # ver1_l=2 00:06:21.016 14:56:51 -- scripts/common.sh@340 -- # ver2_l=1 00:06:21.016 14:56:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:21.016 14:56:51 -- scripts/common.sh@343 -- # case "$op" in 00:06:21.016 14:56:51 -- scripts/common.sh@344 -- # : 1 00:06:21.016 14:56:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:21.016 14:56:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.016 14:56:51 -- scripts/common.sh@364 -- # decimal 1 00:06:21.016 14:56:51 -- scripts/common.sh@352 -- # local d=1 00:06:21.016 14:56:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.016 14:56:51 -- scripts/common.sh@354 -- # echo 1 00:06:21.016 14:56:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:21.016 14:56:51 -- scripts/common.sh@365 -- # decimal 2 00:06:21.016 14:56:51 -- scripts/common.sh@352 -- # local d=2 00:06:21.016 14:56:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.016 14:56:51 -- scripts/common.sh@354 -- # echo 2 00:06:21.016 14:56:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:21.016 14:56:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:21.016 14:56:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:21.016 14:56:51 -- scripts/common.sh@367 -- # return 0 00:06:21.016 14:56:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.016 14:56:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.016 --rc genhtml_branch_coverage=1 00:06:21.016 --rc genhtml_function_coverage=1 00:06:21.016 --rc genhtml_legend=1 00:06:21.016 --rc geninfo_all_blocks=1 00:06:21.016 --rc geninfo_unexecuted_blocks=1 00:06:21.016 00:06:21.016 ' 00:06:21.016 14:56:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.016 --rc genhtml_branch_coverage=1 00:06:21.016 --rc genhtml_function_coverage=1 00:06:21.016 --rc genhtml_legend=1 00:06:21.016 --rc geninfo_all_blocks=1 00:06:21.016 --rc geninfo_unexecuted_blocks=1 00:06:21.016 00:06:21.016 ' 00:06:21.016 14:56:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:21.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.016 --rc genhtml_branch_coverage=1 00:06:21.016 --rc genhtml_function_coverage=1 00:06:21.016 --rc genhtml_legend=1 00:06:21.016 --rc geninfo_all_blocks=1 00:06:21.017 --rc geninfo_unexecuted_blocks=1 00:06:21.017 00:06:21.017 ' 
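The dpdk_mem_utility run that follows starts a spdk_tgt (pid 66462 here), asks it for a DPDK memory dump over RPC, and post-processes the dump with scripts/dpdk_mem_info.py; the heap, mempool and memzone listings printed below come from that script. A short sketch of the same flow, assuming the target listens on the default /var/tmp/spdk.sock and that dpdk_mem_info.py picks up the dump file the RPC reports (/tmp/spdk_mem_dump.txt in this run):

# Ask the running target to write its DPDK memory statistics to a dump file;
# the RPC reply names the file, /tmp/spdk_mem_dump.txt in this log.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
# Summarize the dump: heap totals, mempools and memzones.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
# Per-heap detail for heap id 0: busy/free element counts and the element lists.
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0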
00:06:21.017 14:56:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:21.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.017 --rc genhtml_branch_coverage=1 00:06:21.017 --rc genhtml_function_coverage=1 00:06:21.017 --rc genhtml_legend=1 00:06:21.017 --rc geninfo_all_blocks=1 00:06:21.017 --rc geninfo_unexecuted_blocks=1 00:06:21.017 00:06:21.017 ' 00:06:21.017 14:56:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:21.017 14:56:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66462 00:06:21.017 14:56:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.017 14:56:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66462 00:06:21.017 14:56:51 -- common/autotest_common.sh@829 -- # '[' -z 66462 ']' 00:06:21.017 14:56:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.017 14:56:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.017 14:56:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.017 14:56:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.017 14:56:51 -- common/autotest_common.sh@10 -- # set +x 00:06:21.017 [2024-11-20 14:56:51.679842] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:21.017 [2024-11-20 14:56:51.680414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66462 ] 00:06:21.017 [2024-11-20 14:56:51.818216] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.275 [2024-11-20 14:56:51.858682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.275 [2024-11-20 14:56:51.858922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.213 14:56:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.213 14:56:52 -- common/autotest_common.sh@862 -- # return 0 00:06:22.213 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.213 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.213 14:56:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.213 14:56:52 -- common/autotest_common.sh@10 -- # set +x 00:06:22.213 { 00:06:22.213 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.213 } 00:06:22.213 14:56:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.213 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:22.213 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:22.213 1 heaps totaling size 814.000000 MiB 00:06:22.213 size: 814.000000 MiB heap id: 0 00:06:22.213 end heaps---------- 00:06:22.213 8 mempools totaling size 598.116089 MiB 00:06:22.213 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.213 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.213 size: 84.521057 MiB name: bdev_io_66462 00:06:22.213 size: 51.011292 MiB name: evtpool_66462 00:06:22.213 size: 50.003479 MiB name: msgpool_66462 
00:06:22.213 size: 21.763794 MiB name: PDU_Pool 00:06:22.213 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.213 size: 0.026123 MiB name: Session_Pool 00:06:22.213 end mempools------- 00:06:22.213 6 memzones totaling size 4.142822 MiB 00:06:22.213 size: 1.000366 MiB name: RG_ring_0_66462 00:06:22.213 size: 1.000366 MiB name: RG_ring_1_66462 00:06:22.213 size: 1.000366 MiB name: RG_ring_4_66462 00:06:22.213 size: 1.000366 MiB name: RG_ring_5_66462 00:06:22.213 size: 0.125366 MiB name: RG_ring_2_66462 00:06:22.213 size: 0.015991 MiB name: RG_ring_3_66462 00:06:22.213 end memzones------- 00:06:22.213 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.213 heap id: 0 total size: 814.000000 MiB number of busy elements: 302 number of free elements: 15 00:06:22.213 list of free elements. size: 12.471558 MiB 00:06:22.213 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:22.213 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:22.213 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:22.213 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:22.213 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:22.213 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:22.213 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:22.213 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:22.213 element at address: 0x200000200000 with size: 0.832825 MiB 00:06:22.213 element at address: 0x20001aa00000 with size: 0.569336 MiB 00:06:22.213 element at address: 0x20000b200000 with size: 0.488892 MiB 00:06:22.213 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:22.213 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:22.213 element at address: 0x200027e00000 with size: 0.395752 MiB 00:06:22.213 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:22.213 list of standard malloc elements. 
size: 199.265869 MiB 00:06:22.213 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:22.213 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:22.213 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:22.213 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:22.213 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:22.213 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:22.213 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:22.213 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:22.213 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:22.213 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d71c0 with size: 0.000183 MiB 
00:06:22.213 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:22.213 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:22.214 element at 
address: 0x200003a5a140 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91f00 
with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa943c0 with size: 0.000183 MiB 
00:06:22.214 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:22.214 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:22.215 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:22.215 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e65500 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:22.215 element at 
address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fa80 
with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:22.215 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:22.215 list of memzone associated elements. size: 602.262573 MiB 00:06:22.215 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:22.215 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:22.215 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:22.215 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:22.215 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:22.215 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66462_0 00:06:22.215 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:22.215 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66462_0 00:06:22.215 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:22.215 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66462_0 00:06:22.215 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:22.215 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:22.215 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:22.215 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:22.215 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:22.215 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66462 00:06:22.215 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:22.215 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66462 00:06:22.215 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:22.215 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66462 00:06:22.215 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:22.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:22.215 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:22.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:22.215 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:22.215 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:22.215 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:22.215 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:22.215 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:22.215 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66462 00:06:22.215 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:22.215 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66462 00:06:22.215 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:22.215 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66462 00:06:22.215 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:22.215 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66462 00:06:22.215 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:22.215 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66462 00:06:22.215 element at address: 0x20000b27db80 with size: 0.500488 MiB 
00:06:22.215 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:22.215 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:22.215 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:22.215 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:22.215 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:22.215 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:22.215 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66462 00:06:22.215 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:22.215 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:22.215 element at address: 0x200027e65680 with size: 0.023743 MiB 00:06:22.215 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:22.215 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:22.215 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66462 00:06:22.215 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:06:22.215 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:22.215 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:22.215 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66462 00:06:22.215 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:22.215 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66462 00:06:22.215 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:06:22.215 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:22.215 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:22.215 14:56:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66462 00:06:22.216 14:56:52 -- common/autotest_common.sh@936 -- # '[' -z 66462 ']' 00:06:22.216 14:56:52 -- common/autotest_common.sh@940 -- # kill -0 66462 00:06:22.216 14:56:52 -- common/autotest_common.sh@941 -- # uname 00:06:22.216 14:56:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:22.216 14:56:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66462 00:06:22.216 14:56:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:22.216 14:56:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:22.216 killing process with pid 66462 00:06:22.216 14:56:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66462' 00:06:22.216 14:56:52 -- common/autotest_common.sh@955 -- # kill 66462 00:06:22.216 14:56:52 -- common/autotest_common.sh@960 -- # wait 66462 00:06:22.475 00:06:22.475 real 0m1.666s 00:06:22.475 user 0m1.918s 00:06:22.475 sys 0m0.348s 00:06:22.475 14:56:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:22.475 14:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.475 ************************************ 00:06:22.475 END TEST dpdk_mem_utility 00:06:22.475 ************************************ 00:06:22.475 14:56:53 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:22.475 14:56:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.475 14:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.475 14:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.475 ************************************ 00:06:22.475 START TEST event 00:06:22.475 ************************************ 00:06:22.475 14:56:53 -- common/autotest_common.sh@1114 
-- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:22.475 * Looking for test storage... 00:06:22.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.475 14:56:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:22.475 14:56:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:22.475 14:56:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:22.735 14:56:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:22.735 14:56:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:22.735 14:56:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:22.735 14:56:53 -- scripts/common.sh@335 -- # IFS=.-: 00:06:22.735 14:56:53 -- scripts/common.sh@335 -- # read -ra ver1 00:06:22.735 14:56:53 -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.735 14:56:53 -- scripts/common.sh@336 -- # read -ra ver2 00:06:22.735 14:56:53 -- scripts/common.sh@337 -- # local 'op=<' 00:06:22.735 14:56:53 -- scripts/common.sh@339 -- # ver1_l=2 00:06:22.735 14:56:53 -- scripts/common.sh@340 -- # ver2_l=1 00:06:22.735 14:56:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:22.735 14:56:53 -- scripts/common.sh@343 -- # case "$op" in 00:06:22.735 14:56:53 -- scripts/common.sh@344 -- # : 1 00:06:22.735 14:56:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:22.735 14:56:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.735 14:56:53 -- scripts/common.sh@364 -- # decimal 1 00:06:22.735 14:56:53 -- scripts/common.sh@352 -- # local d=1 00:06:22.735 14:56:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.735 14:56:53 -- scripts/common.sh@354 -- # echo 1 00:06:22.735 14:56:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:22.735 14:56:53 -- scripts/common.sh@365 -- # decimal 2 00:06:22.735 14:56:53 -- scripts/common.sh@352 -- # local d=2 00:06:22.735 14:56:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.735 14:56:53 -- scripts/common.sh@354 -- # echo 2 00:06:22.735 14:56:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:22.735 14:56:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:22.735 14:56:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:22.735 14:56:53 -- scripts/common.sh@367 -- # return 0 00:06:22.735 14:56:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:22.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.735 --rc genhtml_branch_coverage=1 00:06:22.735 --rc genhtml_function_coverage=1 00:06:22.735 --rc genhtml_legend=1 00:06:22.735 --rc geninfo_all_blocks=1 00:06:22.735 --rc geninfo_unexecuted_blocks=1 00:06:22.735 00:06:22.735 ' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:22.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.735 --rc genhtml_branch_coverage=1 00:06:22.735 --rc genhtml_function_coverage=1 00:06:22.735 --rc genhtml_legend=1 00:06:22.735 --rc geninfo_all_blocks=1 00:06:22.735 --rc geninfo_unexecuted_blocks=1 00:06:22.735 00:06:22.735 ' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:22.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.735 --rc genhtml_branch_coverage=1 00:06:22.735 --rc genhtml_function_coverage=1 00:06:22.735 --rc genhtml_legend=1 00:06:22.735 --rc geninfo_all_blocks=1 
00:06:22.735 --rc geninfo_unexecuted_blocks=1 00:06:22.735 00:06:22.735 ' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:22.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.735 --rc genhtml_branch_coverage=1 00:06:22.735 --rc genhtml_function_coverage=1 00:06:22.735 --rc genhtml_legend=1 00:06:22.735 --rc geninfo_all_blocks=1 00:06:22.735 --rc geninfo_unexecuted_blocks=1 00:06:22.735 00:06:22.735 ' 00:06:22.735 14:56:53 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.735 14:56:53 -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.735 14:56:53 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.735 14:56:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:22.735 14:56:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.735 14:56:53 -- common/autotest_common.sh@10 -- # set +x 00:06:22.735 ************************************ 00:06:22.735 START TEST event_perf 00:06:22.735 ************************************ 00:06:22.735 14:56:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:22.735 Running I/O for 1 seconds...[2024-11-20 14:56:53.377333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:22.735 [2024-11-20 14:56:53.377425] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66546 ] 00:06:22.735 [2024-11-20 14:56:53.512726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.995 [2024-11-20 14:56:53.549584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.995 [2024-11-20 14:56:53.549694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.995 [2024-11-20 14:56:53.549776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.995 [2024-11-20 14:56:53.549781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.931 Running I/O for 1 seconds... 00:06:23.931 lcore 0: 193958 00:06:23.931 lcore 1: 193958 00:06:23.931 lcore 2: 193957 00:06:23.931 lcore 3: 193959 00:06:23.931 done. 00:06:23.931 00:06:23.931 real 0m1.257s 00:06:23.931 user 0m4.080s 00:06:23.931 sys 0m0.055s 00:06:23.931 14:56:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.931 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:23.931 ************************************ 00:06:23.931 END TEST event_perf 00:06:23.931 ************************************ 00:06:23.931 14:56:54 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.931 14:56:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:23.931 14:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.931 14:56:54 -- common/autotest_common.sh@10 -- # set +x 00:06:23.931 ************************************ 00:06:23.931 START TEST event_reactor 00:06:23.931 ************************************ 00:06:23.931 14:56:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.931 [2024-11-20 14:56:54.679792] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:23.931 [2024-11-20 14:56:54.679880] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66579 ] 00:06:24.191 [2024-11-20 14:56:54.816985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.191 [2024-11-20 14:56:54.851267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.152 test_start 00:06:25.152 oneshot 00:06:25.152 tick 100 00:06:25.152 tick 100 00:06:25.152 tick 250 00:06:25.152 tick 100 00:06:25.152 tick 100 00:06:25.152 tick 100 00:06:25.152 tick 250 00:06:25.152 tick 500 00:06:25.152 tick 100 00:06:25.152 tick 100 00:06:25.152 tick 250 00:06:25.152 tick 100 00:06:25.152 tick 100 00:06:25.152 test_end 00:06:25.152 00:06:25.152 real 0m1.244s 00:06:25.152 user 0m1.092s 00:06:25.152 sys 0m0.044s 00:06:25.152 14:56:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.152 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.152 ************************************ 00:06:25.152 END TEST event_reactor 00:06:25.152 ************************************ 00:06:25.152 14:56:55 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.152 14:56:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:25.152 14:56:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.152 14:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:25.412 ************************************ 00:06:25.412 START TEST event_reactor_perf 00:06:25.412 ************************************ 00:06:25.412 14:56:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.412 [2024-11-20 14:56:55.977040] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:25.412 [2024-11-20 14:56:55.977167] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66609 ] 00:06:25.412 [2024-11-20 14:56:56.115968] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.412 [2024-11-20 14:56:56.157972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.795 test_start 00:06:26.795 test_end 00:06:26.795 Performance: 345001 events per second 00:06:26.795 00:06:26.795 real 0m1.253s 00:06:26.795 user 0m1.101s 00:06:26.795 sys 0m0.044s 00:06:26.795 14:56:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.795 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:26.795 ************************************ 00:06:26.795 END TEST event_reactor_perf 00:06:26.795 ************************************ 00:06:26.795 14:56:57 -- event/event.sh@49 -- # uname -s 00:06:26.795 14:56:57 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:26.795 14:56:57 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.795 14:56:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.795 14:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.795 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:26.795 ************************************ 00:06:26.795 START TEST event_scheduler 00:06:26.795 ************************************ 00:06:26.795 14:56:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:26.795 * Looking for test storage... 00:06:26.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:26.795 14:56:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:26.795 14:56:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:26.795 14:56:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:26.795 14:56:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:26.795 14:56:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:26.795 14:56:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:26.795 14:56:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:26.795 14:56:57 -- scripts/common.sh@335 -- # IFS=.-: 00:06:26.795 14:56:57 -- scripts/common.sh@335 -- # read -ra ver1 00:06:26.795 14:56:57 -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.795 14:56:57 -- scripts/common.sh@336 -- # read -ra ver2 00:06:26.795 14:56:57 -- scripts/common.sh@337 -- # local 'op=<' 00:06:26.795 14:56:57 -- scripts/common.sh@339 -- # ver1_l=2 00:06:26.795 14:56:57 -- scripts/common.sh@340 -- # ver2_l=1 00:06:26.795 14:56:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:26.795 14:56:57 -- scripts/common.sh@343 -- # case "$op" in 00:06:26.795 14:56:57 -- scripts/common.sh@344 -- # : 1 00:06:26.795 14:56:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:26.795 14:56:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.795 14:56:57 -- scripts/common.sh@364 -- # decimal 1 00:06:26.795 14:56:57 -- scripts/common.sh@352 -- # local d=1 00:06:26.795 14:56:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.795 14:56:57 -- scripts/common.sh@354 -- # echo 1 00:06:26.795 14:56:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:26.795 14:56:57 -- scripts/common.sh@365 -- # decimal 2 00:06:26.795 14:56:57 -- scripts/common.sh@352 -- # local d=2 00:06:26.795 14:56:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.795 14:56:57 -- scripts/common.sh@354 -- # echo 2 00:06:26.795 14:56:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:26.795 14:56:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:26.795 14:56:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:26.795 14:56:57 -- scripts/common.sh@367 -- # return 0 00:06:26.795 14:56:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.795 14:56:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.795 --rc genhtml_branch_coverage=1 00:06:26.795 --rc genhtml_function_coverage=1 00:06:26.795 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 14:56:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 14:56:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 14:56:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 14:56:57 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:26.796 14:56:57 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66683 00:06:26.796 14:56:57 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.796 14:56:57 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:26.796 14:56:57 -- scheduler/scheduler.sh@37 -- # waitforlisten 66683 00:06:26.796 14:56:57 -- common/autotest_common.sh@829 -- # '[' -z 66683 ']' 00:06:26.796 14:56:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.796 14:56:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.796 14:56:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:26.796 14:56:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.796 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:26.796 [2024-11-20 14:56:57.497192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:26.796 [2024-11-20 14:56:57.497844] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66683 ] 00:06:27.055 [2024-11-20 14:56:57.639033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.055 [2024-11-20 14:56:57.685357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.055 [2024-11-20 14:56:57.685459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.055 [2024-11-20 14:56:57.685599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.055 [2024-11-20 14:56:57.685606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.055 14:56:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.055 14:56:57 -- common/autotest_common.sh@862 -- # return 0 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:27.055 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.055 POWER: Env isn't set yet! 00:06:27.055 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:27.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.055 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.055 POWER: Attempting to initialise PSTAT power management... 00:06:27.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.055 POWER: Cannot set governor of lcore 0 to performance 00:06:27.055 POWER: Attempting to initialise CPPC power management... 00:06:27.055 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:27.055 POWER: Cannot set governor of lcore 0 to userspace 00:06:27.055 POWER: Attempting to initialise VM power management... 
00:06:27.055 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:27.055 POWER: Unable to set Power Management Environment for lcore 0 00:06:27.055 [2024-11-20 14:56:57.755475] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:27.055 [2024-11-20 14:56:57.755743] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:27.055 [2024-11-20 14:56:57.755865] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:27.055 [2024-11-20 14:56:57.755884] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:27.055 [2024-11-20 14:56:57.755895] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:27.055 [2024-11-20 14:56:57.755904] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:27.055 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:27.055 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.055 [2024-11-20 14:56:57.818056] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:27.055 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:27.055 14:56:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.055 14:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.055 ************************************ 00:06:27.055 START TEST scheduler_create_thread 00:06:27.055 ************************************ 00:06:27.055 14:56:57 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:27.055 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.055 2 00:06:27.055 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:27.055 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.055 3 00:06:27.055 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.055 14:56:57 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:27.055 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.055 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 4 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:27.314 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.314 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 5 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:27.314 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.314 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 6 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:27.314 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.314 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 7 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:27.314 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.314 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 8 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:27.314 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.314 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.314 9 00:06:27.314 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.314 14:56:57 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:27.315 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.315 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 10 00:06:27.315 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.315 14:56:57 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:27.315 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.315 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.315 14:56:57 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:27.315 14:56:57 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:27.315 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.315 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.315 14:56:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.315 14:56:57 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:27.315 14:56:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.315 14:56:57 -- common/autotest_common.sh@10 -- # set +x 00:06:28.690 14:56:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.690 14:56:59 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:28.690 14:56:59 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:28.690 14:56:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.690 14:56:59 -- common/autotest_common.sh@10 -- # set +x 00:06:30.068 14:57:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.068 00:06:30.068 real 0m2.613s 00:06:30.068 user 0m0.019s 00:06:30.068 sys 0m0.007s 00:06:30.068 14:57:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.068 ************************************ 00:06:30.068 END TEST scheduler_create_thread 
00:06:30.068 14:57:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.068 ************************************ 00:06:30.068 14:57:00 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.068 14:57:00 -- scheduler/scheduler.sh@46 -- # killprocess 66683 00:06:30.068 14:57:00 -- common/autotest_common.sh@936 -- # '[' -z 66683 ']' 00:06:30.068 14:57:00 -- common/autotest_common.sh@940 -- # kill -0 66683 00:06:30.068 14:57:00 -- common/autotest_common.sh@941 -- # uname 00:06:30.068 14:57:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:30.068 14:57:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66683 00:06:30.068 14:57:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:30.068 14:57:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:30.068 killing process with pid 66683 00:06:30.068 14:57:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66683' 00:06:30.068 14:57:00 -- common/autotest_common.sh@955 -- # kill 66683 00:06:30.068 14:57:00 -- common/autotest_common.sh@960 -- # wait 66683 00:06:30.327 [2024-11-20 14:57:00.922084] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:30.327 00:06:30.327 real 0m3.821s 00:06:30.327 user 0m5.625s 00:06:30.327 sys 0m0.325s 00:06:30.327 14:57:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.328 14:57:01 -- common/autotest_common.sh@10 -- # set +x 00:06:30.328 ************************************ 00:06:30.328 END TEST event_scheduler 00:06:30.328 ************************************ 00:06:30.328 14:57:01 -- event/event.sh@51 -- # modprobe -n nbd 00:06:30.328 14:57:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:30.328 14:57:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.328 14:57:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.328 14:57:01 -- common/autotest_common.sh@10 -- # set +x 00:06:30.587 ************************************ 00:06:30.587 START TEST app_repeat 00:06:30.587 ************************************ 00:06:30.587 14:57:01 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:30.587 14:57:01 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.587 14:57:01 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.587 14:57:01 -- event/event.sh@13 -- # local nbd_list 00:06:30.587 14:57:01 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.587 14:57:01 -- event/event.sh@14 -- # local bdev_list 00:06:30.587 14:57:01 -- event/event.sh@15 -- # local repeat_times=4 00:06:30.587 14:57:01 -- event/event.sh@17 -- # modprobe nbd 00:06:30.587 14:57:01 -- event/event.sh@19 -- # repeat_pid=66770 00:06:30.587 14:57:01 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:30.587 14:57:01 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.587 Process app_repeat pid: 66770 00:06:30.587 14:57:01 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66770' 00:06:30.587 14:57:01 -- event/event.sh@23 -- # for i in {0..2} 00:06:30.587 spdk_app_start Round 0 00:06:30.587 14:57:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:30.587 14:57:01 -- event/event.sh@25 -- # waitforlisten 66770 /var/tmp/spdk-nbd.sock 00:06:30.587 14:57:01 -- common/autotest_common.sh@829 -- # '[' -z 66770 ']' 00:06:30.587 14:57:01 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.587 14:57:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:30.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.587 14:57:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.587 14:57:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:30.587 14:57:01 -- common/autotest_common.sh@10 -- # set +x 00:06:30.587 [2024-11-20 14:57:01.165913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:30.587 [2024-11-20 14:57:01.166003] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66770 ] 00:06:30.587 [2024-11-20 14:57:01.306079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.587 [2024-11-20 14:57:01.347514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.587 [2024-11-20 14:57:01.347501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.846 14:57:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.846 14:57:01 -- common/autotest_common.sh@862 -- # return 0 00:06:30.846 14:57:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.105 Malloc0 00:06:31.105 14:57:01 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.363 Malloc1 00:06:31.363 14:57:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@12 -- # local i 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.363 14:57:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.622 /dev/nbd0 00:06:31.622 14:57:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.622 14:57:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.622 14:57:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:31.622 14:57:02 -- common/autotest_common.sh@867 -- # local i 00:06:31.622 14:57:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.622 14:57:02 -- common/autotest_common.sh@869 
-- # (( i <= 20 )) 00:06:31.622 14:57:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:31.623 14:57:02 -- common/autotest_common.sh@871 -- # break 00:06:31.623 14:57:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.623 14:57:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.623 14:57:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.623 1+0 records in 00:06:31.623 1+0 records out 00:06:31.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029527 s, 13.9 MB/s 00:06:31.623 14:57:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.623 14:57:02 -- common/autotest_common.sh@884 -- # size=4096 00:06:31.623 14:57:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.623 14:57:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.623 14:57:02 -- common/autotest_common.sh@887 -- # return 0 00:06:31.623 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.623 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.623 14:57:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.881 /dev/nbd1 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.881 14:57:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:31.881 14:57:02 -- common/autotest_common.sh@867 -- # local i 00:06:31.881 14:57:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.881 14:57:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.881 14:57:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:31.881 14:57:02 -- common/autotest_common.sh@871 -- # break 00:06:31.881 14:57:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.881 14:57:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.881 14:57:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.881 1+0 records in 00:06:31.881 1+0 records out 00:06:31.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028777 s, 14.2 MB/s 00:06:31.881 14:57:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.881 14:57:02 -- common/autotest_common.sh@884 -- # size=4096 00:06:31.881 14:57:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:31.881 14:57:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.881 14:57:02 -- common/autotest_common.sh@887 -- # return 0 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.881 14:57:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.450 14:57:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.450 { 00:06:32.450 "nbd_device": "/dev/nbd0", 00:06:32.450 "bdev_name": "Malloc0" 00:06:32.450 }, 00:06:32.450 { 00:06:32.450 "nbd_device": "/dev/nbd1", 00:06:32.450 "bdev_name": "Malloc1" 
00:06:32.450 } 00:06:32.450 ]' 00:06:32.450 14:57:02 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.450 { 00:06:32.450 "nbd_device": "/dev/nbd0", 00:06:32.450 "bdev_name": "Malloc0" 00:06:32.450 }, 00:06:32.450 { 00:06:32.450 "nbd_device": "/dev/nbd1", 00:06:32.450 "bdev_name": "Malloc1" 00:06:32.450 } 00:06:32.450 ]' 00:06:32.450 14:57:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.450 /dev/nbd1' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.450 /dev/nbd1' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.450 256+0 records in 00:06:32.450 256+0 records out 00:06:32.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104858 s, 100 MB/s 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.450 256+0 records in 00:06:32.450 256+0 records out 00:06:32.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275794 s, 38.0 MB/s 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.450 256+0 records in 00:06:32.450 256+0 records out 00:06:32.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255925 s, 41.0 MB/s 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@51 -- # local i 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.450 14:57:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@41 -- # break 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.709 14:57:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@41 -- # break 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.968 14:57:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.227 14:57:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.227 14:57:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.227 14:57:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@65 -- # true 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.486 14:57:04 -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.486 14:57:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.745 14:57:04 -- event/event.sh@35 -- # sleep 3 00:06:33.745 [2024-11-20 14:57:04.493605] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.745 [2024-11-20 14:57:04.528400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.745 [2024-11-20 
14:57:04.528413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.004 [2024-11-20 14:57:04.561055] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.004 [2024-11-20 14:57:04.561119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.291 14:57:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:37.291 spdk_app_start Round 1 00:06:37.291 14:57:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:37.291 14:57:07 -- event/event.sh@25 -- # waitforlisten 66770 /var/tmp/spdk-nbd.sock 00:06:37.291 14:57:07 -- common/autotest_common.sh@829 -- # '[' -z 66770 ']' 00:06:37.291 14:57:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.291 14:57:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.291 14:57:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.291 14:57:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.291 14:57:07 -- common/autotest_common.sh@10 -- # set +x 00:06:37.291 14:57:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.291 14:57:07 -- common/autotest_common.sh@862 -- # return 0 00:06:37.291 14:57:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.291 Malloc0 00:06:37.291 14:57:07 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.550 Malloc1 00:06:37.550 14:57:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@12 -- # local i 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.550 14:57:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.809 /dev/nbd0 00:06:37.809 14:57:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.809 14:57:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.809 14:57:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:37.809 14:57:08 -- common/autotest_common.sh@867 -- # local i 00:06:37.809 14:57:08 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:06:37.809 14:57:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:37.809 14:57:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:37.809 14:57:08 -- common/autotest_common.sh@871 -- # break 00:06:37.809 14:57:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:37.809 14:57:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:37.809 14:57:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.809 1+0 records in 00:06:37.809 1+0 records out 00:06:37.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330414 s, 12.4 MB/s 00:06:37.809 14:57:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.809 14:57:08 -- common/autotest_common.sh@884 -- # size=4096 00:06:37.809 14:57:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.809 14:57:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:37.809 14:57:08 -- common/autotest_common.sh@887 -- # return 0 00:06:37.809 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.809 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.809 14:57:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.377 /dev/nbd1 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.377 14:57:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:38.377 14:57:08 -- common/autotest_common.sh@867 -- # local i 00:06:38.377 14:57:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:38.377 14:57:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:38.377 14:57:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:38.377 14:57:08 -- common/autotest_common.sh@871 -- # break 00:06:38.377 14:57:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:38.377 14:57:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:38.377 14:57:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.377 1+0 records in 00:06:38.377 1+0 records out 00:06:38.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335535 s, 12.2 MB/s 00:06:38.377 14:57:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.377 14:57:08 -- common/autotest_common.sh@884 -- # size=4096 00:06:38.377 14:57:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.377 14:57:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:38.377 14:57:08 -- common/autotest_common.sh@887 -- # return 0 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.377 14:57:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:38.634 { 00:06:38.634 "nbd_device": "/dev/nbd0", 00:06:38.634 "bdev_name": "Malloc0" 00:06:38.634 }, 00:06:38.634 { 00:06:38.634 
"nbd_device": "/dev/nbd1", 00:06:38.634 "bdev_name": "Malloc1" 00:06:38.634 } 00:06:38.634 ]' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:38.634 { 00:06:38.634 "nbd_device": "/dev/nbd0", 00:06:38.634 "bdev_name": "Malloc0" 00:06:38.634 }, 00:06:38.634 { 00:06:38.634 "nbd_device": "/dev/nbd1", 00:06:38.634 "bdev_name": "Malloc1" 00:06:38.634 } 00:06:38.634 ]' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:38.634 /dev/nbd1' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.634 /dev/nbd1' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.634 14:57:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.634 256+0 records in 00:06:38.634 256+0 records out 00:06:38.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00664011 s, 158 MB/s 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.635 256+0 records in 00:06:38.635 256+0 records out 00:06:38.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203232 s, 51.6 MB/s 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.635 256+0 records in 00:06:38.635 256+0 records out 00:06:38.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315802 s, 33.2 MB/s 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.635 14:57:09 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@51 -- # local i 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.635 14:57:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@41 -- # break 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.892 14:57:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@41 -- # break 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.516 14:57:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.516 14:57:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.516 14:57:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.516 14:57:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@65 -- # true 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.774 14:57:10 -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.774 14:57:10 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:40.031 14:57:10 -- event/event.sh@35 -- # sleep 3 00:06:40.031 [2024-11-20 14:57:10.747134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.031 [2024-11-20 14:57:10.782307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
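The traced commands above implement SPDK's nbd write/verify round-trip (nbd_dd_data_verify): fill a scratch file from /dev/urandom, push it onto each exported /dev/nbdX with O_DIRECT, then read the devices back with cmp before removing the scratch file. The following is a minimal standalone sketch of that same flow; the device list and the scratch path are illustrative assumptions, while the dd geometry and cmp flags mirror the trace.

    #!/usr/bin/env bash
    # Sketch of the write-then-verify pattern seen in nbd_dd_data_verify.
    # Assumes /dev/nbd0 and /dev/nbd1 are already exported and writable.
    set -euo pipefail

    nbd_list=(/dev/nbd0 /dev/nbd1)   # assumed devices
    tmp_file=/tmp/nbdrandtest        # assumed scratch path

    # 1 MiB of random data to push through every device (4 KiB x 256, as in the trace).
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

    for dev in "${nbd_list[@]}"; do
        # O_DIRECT write so the data really hits the nbd backend.
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    for dev in "${nbd_list[@]}"; do
        # Byte-for-byte comparison of the first 1 MiB; cmp exits non-zero on any mismatch.
        cmp -b -n 1M "$tmp_file" "$dev"
    done

    rm "$tmp_file"
    echo "data verified on ${#nbd_list[@]} nbd devices"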
00:06:40.031 [2024-11-20 14:57:10.782318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.031 [2024-11-20 14:57:10.812185] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.031 [2024-11-20 14:57:10.812247] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.310 spdk_app_start Round 2 00:06:43.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.310 14:57:13 -- event/event.sh@23 -- # for i in {0..2} 00:06:43.310 14:57:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:43.310 14:57:13 -- event/event.sh@25 -- # waitforlisten 66770 /var/tmp/spdk-nbd.sock 00:06:43.310 14:57:13 -- common/autotest_common.sh@829 -- # '[' -z 66770 ']' 00:06:43.310 14:57:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.310 14:57:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.310 14:57:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.310 14:57:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.310 14:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.310 14:57:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.310 14:57:14 -- common/autotest_common.sh@862 -- # return 0 00:06:43.310 14:57:14 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:43.570 Malloc0 00:06:43.570 14:57:14 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.139 Malloc1 00:06:44.139 14:57:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.139 14:57:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.397 /dev/nbd0 00:06:44.657 14:57:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.657 14:57:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.657 14:57:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:44.657 14:57:15 -- common/autotest_common.sh@867 -- # local i 00:06:44.657 14:57:15 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.657 14:57:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.657 14:57:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:44.657 14:57:15 -- common/autotest_common.sh@871 -- # break 00:06:44.657 14:57:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.657 14:57:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.657 14:57:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.657 1+0 records in 00:06:44.657 1+0 records out 00:06:44.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209426 s, 19.6 MB/s 00:06:44.657 14:57:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.657 14:57:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:44.657 14:57:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.657 14:57:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.657 14:57:15 -- common/autotest_common.sh@887 -- # return 0 00:06:44.657 14:57:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.657 14:57:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.657 14:57:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.915 /dev/nbd1 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.915 14:57:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:44.915 14:57:15 -- common/autotest_common.sh@867 -- # local i 00:06:44.915 14:57:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:44.915 14:57:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:44.915 14:57:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:44.915 14:57:15 -- common/autotest_common.sh@871 -- # break 00:06:44.915 14:57:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:44.915 14:57:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:44.915 14:57:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.915 1+0 records in 00:06:44.915 1+0 records out 00:06:44.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527184 s, 7.8 MB/s 00:06:44.915 14:57:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.915 14:57:15 -- common/autotest_common.sh@884 -- # size=4096 00:06:44.915 14:57:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.915 14:57:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:44.915 14:57:15 -- common/autotest_common.sh@887 -- # return 0 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.915 14:57:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd0", 00:06:45.173 "bdev_name": "Malloc0" 
00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd1", 00:06:45.173 "bdev_name": "Malloc1" 00:06:45.173 } 00:06:45.173 ]' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd0", 00:06:45.173 "bdev_name": "Malloc0" 00:06:45.173 }, 00:06:45.173 { 00:06:45.173 "nbd_device": "/dev/nbd1", 00:06:45.173 "bdev_name": "Malloc1" 00:06:45.173 } 00:06:45.173 ]' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.173 /dev/nbd1' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.173 /dev/nbd1' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.173 256+0 records in 00:06:45.173 256+0 records out 00:06:45.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755167 s, 139 MB/s 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.173 256+0 records in 00:06:45.173 256+0 records out 00:06:45.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211047 s, 49.7 MB/s 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.173 256+0 records in 00:06:45.173 256+0 records out 00:06:45.173 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0325555 s, 32.2 MB/s 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.173 14:57:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@41 -- # break 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@41 -- # break 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.739 14:57:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@65 -- # true 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.306 14:57:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.306 14:57:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.565 14:57:17 -- event/event.sh@35 -- # sleep 3 00:06:46.565 [2024-11-20 14:57:17.329010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.565 [2024-11-20 14:57:17.364685] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:46.565 [2024-11-20 14:57:17.364697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.824 [2024-11-20 14:57:17.395841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.824 [2024-11-20 14:57:17.395902] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.110 14:57:20 -- event/event.sh@38 -- # waitforlisten 66770 /var/tmp/spdk-nbd.sock 00:06:50.110 14:57:20 -- common/autotest_common.sh@829 -- # '[' -z 66770 ']' 00:06:50.110 14:57:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.110 14:57:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.110 14:57:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.110 14:57:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.110 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.110 14:57:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.110 14:57:20 -- common/autotest_common.sh@862 -- # return 0 00:06:50.110 14:57:20 -- event/event.sh@39 -- # killprocess 66770 00:06:50.110 14:57:20 -- common/autotest_common.sh@936 -- # '[' -z 66770 ']' 00:06:50.110 14:57:20 -- common/autotest_common.sh@940 -- # kill -0 66770 00:06:50.110 14:57:20 -- common/autotest_common.sh@941 -- # uname 00:06:50.110 14:57:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.110 14:57:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66770 00:06:50.110 14:57:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.110 14:57:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.110 killing process with pid 66770 00:06:50.110 14:57:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66770' 00:06:50.110 14:57:20 -- common/autotest_common.sh@955 -- # kill 66770 00:06:50.110 14:57:20 -- common/autotest_common.sh@960 -- # wait 66770 00:06:50.110 spdk_app_start is called in Round 0. 00:06:50.110 Shutdown signal received, stop current app iteration 00:06:50.110 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.110 spdk_app_start is called in Round 1. 00:06:50.110 Shutdown signal received, stop current app iteration 00:06:50.110 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.110 spdk_app_start is called in Round 2. 00:06:50.110 Shutdown signal received, stop current app iteration 00:06:50.110 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:06:50.110 spdk_app_start is called in Round 3. 
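The killprocess steps traced above guard against signalling the wrong thing: the pid is probed with kill -0, its command name is read back with ps (an SPDK app shows up as reactor_0), and only then is SIGTERM sent and the process reaped. The sketch below reproduces that guard in simplified form; it assumes the target was started by the current shell (so wait can reap it) and drops the special handling the real helper has for sudo-wrapped processes.

    # Simplified re-creation of the killprocess guard seen in the trace.
    killprocess() {
        local pid=$1

        kill -0 "$pid" || return 1                       # is the pid alive at all?
        local name
        name=$(ps --no-headers -o comm= "$pid")          # reactor_0 for a running SPDK app
        [ "$name" != "sudo" ] || return 1                # never terminate a sudo wrapper here

        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; ignore the SIGTERM exit status
    }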
00:06:50.110 Shutdown signal received, stop current app iteration 00:06:50.110 14:57:20 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.110 14:57:20 -- event/event.sh@42 -- # return 0 00:06:50.110 00:06:50.110 real 0m19.562s 00:06:50.110 user 0m45.281s 00:06:50.110 sys 0m2.730s 00:06:50.110 14:57:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.110 ************************************ 00:06:50.110 END TEST app_repeat 00:06:50.110 ************************************ 00:06:50.110 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.110 14:57:20 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.110 14:57:20 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.110 14:57:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.110 14:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.110 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.110 ************************************ 00:06:50.110 START TEST cpu_locks 00:06:50.110 ************************************ 00:06:50.110 14:57:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:50.110 * Looking for test storage... 00:06:50.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:50.110 14:57:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:50.110 14:57:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:50.110 14:57:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:50.110 14:57:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:50.110 14:57:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:50.110 14:57:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:50.110 14:57:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:50.110 14:57:20 -- scripts/common.sh@335 -- # IFS=.-: 00:06:50.110 14:57:20 -- scripts/common.sh@335 -- # read -ra ver1 00:06:50.110 14:57:20 -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.110 14:57:20 -- scripts/common.sh@336 -- # read -ra ver2 00:06:50.110 14:57:20 -- scripts/common.sh@337 -- # local 'op=<' 00:06:50.110 14:57:20 -- scripts/common.sh@339 -- # ver1_l=2 00:06:50.110 14:57:20 -- scripts/common.sh@340 -- # ver2_l=1 00:06:50.110 14:57:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:50.110 14:57:20 -- scripts/common.sh@343 -- # case "$op" in 00:06:50.110 14:57:20 -- scripts/common.sh@344 -- # : 1 00:06:50.110 14:57:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:50.110 14:57:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.110 14:57:20 -- scripts/common.sh@364 -- # decimal 1 00:06:50.369 14:57:20 -- scripts/common.sh@352 -- # local d=1 00:06:50.369 14:57:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.369 14:57:20 -- scripts/common.sh@354 -- # echo 1 00:06:50.369 14:57:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:50.369 14:57:20 -- scripts/common.sh@365 -- # decimal 2 00:06:50.369 14:57:20 -- scripts/common.sh@352 -- # local d=2 00:06:50.369 14:57:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.369 14:57:20 -- scripts/common.sh@354 -- # echo 2 00:06:50.369 14:57:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:50.369 14:57:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:50.369 14:57:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:50.369 14:57:20 -- scripts/common.sh@367 -- # return 0 00:06:50.369 14:57:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.369 14:57:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.369 --rc genhtml_branch_coverage=1 00:06:50.369 --rc genhtml_function_coverage=1 00:06:50.369 --rc genhtml_legend=1 00:06:50.369 --rc geninfo_all_blocks=1 00:06:50.369 --rc geninfo_unexecuted_blocks=1 00:06:50.369 00:06:50.369 ' 00:06:50.369 14:57:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.369 --rc genhtml_branch_coverage=1 00:06:50.369 --rc genhtml_function_coverage=1 00:06:50.369 --rc genhtml_legend=1 00:06:50.369 --rc geninfo_all_blocks=1 00:06:50.369 --rc geninfo_unexecuted_blocks=1 00:06:50.369 00:06:50.369 ' 00:06:50.369 14:57:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.369 --rc genhtml_branch_coverage=1 00:06:50.369 --rc genhtml_function_coverage=1 00:06:50.369 --rc genhtml_legend=1 00:06:50.369 --rc geninfo_all_blocks=1 00:06:50.369 --rc geninfo_unexecuted_blocks=1 00:06:50.369 00:06:50.369 ' 00:06:50.369 14:57:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:50.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.369 --rc genhtml_branch_coverage=1 00:06:50.369 --rc genhtml_function_coverage=1 00:06:50.369 --rc genhtml_legend=1 00:06:50.369 --rc geninfo_all_blocks=1 00:06:50.369 --rc geninfo_unexecuted_blocks=1 00:06:50.369 00:06:50.369 ' 00:06:50.369 14:57:20 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.369 14:57:20 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.369 14:57:20 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.369 14:57:20 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.369 14:57:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.369 14:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.369 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.369 ************************************ 00:06:50.369 START TEST default_locks 00:06:50.370 ************************************ 00:06:50.370 14:57:20 -- common/autotest_common.sh@1114 -- # default_locks 00:06:50.370 14:57:20 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=67213 00:06:50.370 14:57:20 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.370 14:57:20 -- event/cpu_locks.sh@47 -- # waitforlisten 
67213 00:06:50.370 14:57:20 -- common/autotest_common.sh@829 -- # '[' -z 67213 ']' 00:06:50.370 14:57:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.370 14:57:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.370 14:57:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.370 14:57:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.370 14:57:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.370 [2024-11-20 14:57:21.002490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:50.370 [2024-11-20 14:57:21.002905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67213 ] 00:06:50.370 [2024-11-20 14:57:21.144343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.628 [2024-11-20 14:57:21.182242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.628 [2024-11-20 14:57:21.182621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.564 14:57:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.564 14:57:21 -- common/autotest_common.sh@862 -- # return 0 00:06:51.564 14:57:21 -- event/cpu_locks.sh@49 -- # locks_exist 67213 00:06:51.564 14:57:22 -- event/cpu_locks.sh@22 -- # lslocks -p 67213 00:06:51.564 14:57:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.834 14:57:22 -- event/cpu_locks.sh@50 -- # killprocess 67213 00:06:51.834 14:57:22 -- common/autotest_common.sh@936 -- # '[' -z 67213 ']' 00:06:51.834 14:57:22 -- common/autotest_common.sh@940 -- # kill -0 67213 00:06:51.834 14:57:22 -- common/autotest_common.sh@941 -- # uname 00:06:51.834 14:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.834 14:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67213 00:06:51.834 killing process with pid 67213 00:06:51.834 14:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.834 14:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.834 14:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67213' 00:06:51.834 14:57:22 -- common/autotest_common.sh@955 -- # kill 67213 00:06:51.834 14:57:22 -- common/autotest_common.sh@960 -- # wait 67213 00:06:52.096 14:57:22 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 67213 00:06:52.096 14:57:22 -- common/autotest_common.sh@650 -- # local es=0 00:06:52.096 14:57:22 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67213 00:06:52.096 14:57:22 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.096 14:57:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.096 14:57:22 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
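The default_locks case above decides whether a running target actually owns its CPU core lock by listing the file locks held by its pid and grepping for the spdk_cpu_lock prefix. A hedged sketch of that check is below; the pid in the example is a placeholder, and the helper name simply mirrors the locks_exist function in the trace.

    # Sketch of the locks_exist check used by cpu_locks.sh.
    # The lock files live under /var/tmp as spdk_cpu_lock_<core> and are
    # held with an exclusive lock for as long as the app owns the core.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }

    # Example (placeholder pid):
    #   locks_exist 67213 && echo "core locks are held"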
00:06:52.096 ERROR: process (pid: 67213) is no longer running 00:06:52.096 14:57:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.096 14:57:22 -- common/autotest_common.sh@653 -- # waitforlisten 67213 00:06:52.096 14:57:22 -- common/autotest_common.sh@829 -- # '[' -z 67213 ']' 00:06:52.096 14:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.096 14:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.096 14:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.096 14:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.096 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.096 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67213) - No such process 00:06:52.096 14:57:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.097 14:57:22 -- common/autotest_common.sh@862 -- # return 1 00:06:52.097 14:57:22 -- common/autotest_common.sh@653 -- # es=1 00:06:52.097 14:57:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.097 14:57:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.097 14:57:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.097 14:57:22 -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.097 14:57:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.097 14:57:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.097 14:57:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.097 00:06:52.097 real 0m1.787s 00:06:52.097 user 0m2.071s 00:06:52.097 sys 0m0.470s 00:06:52.097 14:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.097 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 ************************************ 00:06:52.097 END TEST default_locks 00:06:52.097 ************************************ 00:06:52.097 14:57:22 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.097 14:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.097 14:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.097 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 ************************************ 00:06:52.097 START TEST default_locks_via_rpc 00:06:52.097 ************************************ 00:06:52.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.097 14:57:22 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:06:52.097 14:57:22 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=67265 00:06:52.097 14:57:22 -- event/cpu_locks.sh@63 -- # waitforlisten 67265 00:06:52.097 14:57:22 -- common/autotest_common.sh@829 -- # '[' -z 67265 ']' 00:06:52.097 14:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.097 14:57:22 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.097 14:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.097 14:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.097 14:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.097 14:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.097 [2024-11-20 14:57:22.823265] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
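The ERROR line and the es=1 bookkeeping above come from the NOT wrapper: the test deliberately calls waitforlisten against a pid that has already been killed and treats the resulting failure as success. A stripped-down sketch of that inversion pattern follows; it omits the exec-argument validation and the exit-code-above-128 handling visible in the trace.

    # Sketch of the "NOT" pattern: run a command that is expected to fail
    # and convert its failure into a passing result.
    NOT() {
        local es=0
        "$@" || es=$?
        # Success here means the wrapped command did NOT succeed.
        (( es != 0 ))
    }

    # Example: probing a path that does not exist should fail,
    # and NOT turns that failure into a green result.
    NOT test -e /no/such/path && echo "expected failure observed"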
00:06:52.097 [2024-11-20 14:57:22.823636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67265 ] 00:06:52.393 [2024-11-20 14:57:22.961053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.393 [2024-11-20 14:57:23.001791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.393 [2024-11-20 14:57:23.001980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.353 14:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.353 14:57:23 -- common/autotest_common.sh@862 -- # return 0 00:06:53.353 14:57:23 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.353 14:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.353 14:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.353 14:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.353 14:57:23 -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.353 14:57:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.353 14:57:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.353 14:57:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.353 14:57:23 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.353 14:57:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.353 14:57:23 -- common/autotest_common.sh@10 -- # set +x 00:06:53.353 14:57:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.353 14:57:23 -- event/cpu_locks.sh@71 -- # locks_exist 67265 00:06:53.353 14:57:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.353 14:57:23 -- event/cpu_locks.sh@22 -- # lslocks -p 67265 00:06:53.612 14:57:24 -- event/cpu_locks.sh@73 -- # killprocess 67265 00:06:53.612 14:57:24 -- common/autotest_common.sh@936 -- # '[' -z 67265 ']' 00:06:53.612 14:57:24 -- common/autotest_common.sh@940 -- # kill -0 67265 00:06:53.612 14:57:24 -- common/autotest_common.sh@941 -- # uname 00:06:53.870 14:57:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.870 14:57:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67265 00:06:53.870 killing process with pid 67265 00:06:53.870 14:57:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.870 14:57:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.870 14:57:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67265' 00:06:53.870 14:57:24 -- common/autotest_common.sh@955 -- # kill 67265 00:06:53.870 14:57:24 -- common/autotest_common.sh@960 -- # wait 67265 00:06:54.128 00:06:54.128 real 0m1.910s 00:06:54.128 user 0m2.289s 00:06:54.128 sys 0m0.479s 00:06:54.128 14:57:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.128 14:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.128 ************************************ 00:06:54.128 END TEST default_locks_via_rpc 00:06:54.128 ************************************ 00:06:54.128 14:57:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:54.128 14:57:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:54.128 14:57:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.128 14:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.128 
************************************ 00:06:54.128 START TEST non_locking_app_on_locked_coremask 00:06:54.128 ************************************ 00:06:54.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.128 14:57:24 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:06:54.128 14:57:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67316 00:06:54.128 14:57:24 -- event/cpu_locks.sh@81 -- # waitforlisten 67316 /var/tmp/spdk.sock 00:06:54.128 14:57:24 -- common/autotest_common.sh@829 -- # '[' -z 67316 ']' 00:06:54.128 14:57:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.128 14:57:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.128 14:57:24 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.128 14:57:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.128 14:57:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.128 14:57:24 -- common/autotest_common.sh@10 -- # set +x 00:06:54.128 [2024-11-20 14:57:24.782879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:54.128 [2024-11-20 14:57:24.782977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67316 ] 00:06:54.128 [2024-11-20 14:57:24.916961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.386 [2024-11-20 14:57:24.951458] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:54.386 [2024-11-20 14:57:24.951628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.320 14:57:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.320 14:57:25 -- common/autotest_common.sh@862 -- # return 0 00:06:55.320 14:57:25 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67332 00:06:55.320 14:57:25 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:55.320 14:57:25 -- event/cpu_locks.sh@85 -- # waitforlisten 67332 /var/tmp/spdk2.sock 00:06:55.320 14:57:25 -- common/autotest_common.sh@829 -- # '[' -z 67332 ']' 00:06:55.320 14:57:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.320 14:57:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.320 14:57:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.320 14:57:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.320 14:57:25 -- common/autotest_common.sh@10 -- # set +x 00:06:55.320 [2024-11-20 14:57:25.914350] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:55.320 [2024-11-20 14:57:25.914716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67332 ] 00:06:55.320 [2024-11-20 14:57:26.065275] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
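The non_locking_app_on_locked_coremask case starting above runs two targets on the same core mask: the first claims the core lock normally, the second is launched with --disable-cpumask-locks and a private RPC socket so the two can coexist and be driven independently. The sketch below mirrors that launch sequence using the binary and rpc.py paths that appear in the trace; the socket-polling loop is only a crude stand-in for the real waitforlisten helper.

    # Sketch: two spdk_tgt instances sharing core 0, the second opting out of core locks.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$SPDK_BIN" -m 0x1 &                                   # holds the spdk_cpu_lock file for core 0
    pid1=$!
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                # same core, no lock taken

    # Crude stand-in for waitforlisten: wait until each RPC socket exists.
    for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
        until [ -S "$sock" ]; do sleep 0.1; done
    done

    # ... test body runs RPCs against both instances ...

    # Tear down each instance through its own socket.
    "$RPC" spdk_kill_instance SIGTERM
    "$RPC" -s /var/tmp/spdk2.sock spdk_kill_instance SIGTERM
    wait "$pid1" "$pid2" || true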
00:06:55.320 [2024-11-20 14:57:26.065366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.579 [2024-11-20 14:57:26.131947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:55.579 [2024-11-20 14:57:26.132149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.516 14:57:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.516 14:57:27 -- common/autotest_common.sh@862 -- # return 0 00:06:56.516 14:57:27 -- event/cpu_locks.sh@87 -- # locks_exist 67316 00:06:56.516 14:57:27 -- event/cpu_locks.sh@22 -- # lslocks -p 67316 00:06:56.516 14:57:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.449 14:57:27 -- event/cpu_locks.sh@89 -- # killprocess 67316 00:06:57.449 14:57:27 -- common/autotest_common.sh@936 -- # '[' -z 67316 ']' 00:06:57.449 14:57:27 -- common/autotest_common.sh@940 -- # kill -0 67316 00:06:57.449 14:57:27 -- common/autotest_common.sh@941 -- # uname 00:06:57.449 14:57:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.449 14:57:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67316 00:06:57.449 14:57:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.449 14:57:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.449 killing process with pid 67316 00:06:57.449 14:57:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67316' 00:06:57.449 14:57:27 -- common/autotest_common.sh@955 -- # kill 67316 00:06:57.449 14:57:27 -- common/autotest_common.sh@960 -- # wait 67316 00:06:57.707 14:57:28 -- event/cpu_locks.sh@90 -- # killprocess 67332 00:06:57.707 14:57:28 -- common/autotest_common.sh@936 -- # '[' -z 67332 ']' 00:06:57.707 14:57:28 -- common/autotest_common.sh@940 -- # kill -0 67332 00:06:57.707 14:57:28 -- common/autotest_common.sh@941 -- # uname 00:06:57.707 14:57:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.707 14:57:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67332 00:06:57.707 killing process with pid 67332 00:06:57.707 14:57:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.707 14:57:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.707 14:57:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67332' 00:06:57.707 14:57:28 -- common/autotest_common.sh@955 -- # kill 67332 00:06:57.707 14:57:28 -- common/autotest_common.sh@960 -- # wait 67332 00:06:57.968 ************************************ 00:06:57.968 END TEST non_locking_app_on_locked_coremask 00:06:57.968 ************************************ 00:06:57.968 00:06:57.968 real 0m3.899s 00:06:57.968 user 0m4.841s 00:06:57.968 sys 0m0.951s 00:06:57.968 14:57:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:57.968 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:57.968 14:57:28 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:57.968 14:57:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.968 14:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.968 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:57.968 ************************************ 00:06:57.968 START TEST locking_app_on_unlocked_coremask 00:06:57.968 ************************************ 00:06:57.968 14:57:28 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:06:57.968 14:57:28 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67399 00:06:57.968 14:57:28 -- event/cpu_locks.sh@99 -- # waitforlisten 67399 /var/tmp/spdk.sock 00:06:57.968 14:57:28 -- common/autotest_common.sh@829 -- # '[' -z 67399 ']' 00:06:57.968 14:57:28 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:57.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.968 14:57:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.968 14:57:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:57.968 14:57:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.968 14:57:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:57.968 14:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:57.968 [2024-11-20 14:57:28.727538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:57.968 [2024-11-20 14:57:28.727668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67399 ] 00:06:58.228 [2024-11-20 14:57:28.869280] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:58.228 [2024-11-20 14:57:28.869350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.228 [2024-11-20 14:57:28.908380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.228 [2024-11-20 14:57:28.908565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.164 14:57:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:59.164 14:57:29 -- common/autotest_common.sh@862 -- # return 0 00:06:59.164 14:57:29 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67415 00:06:59.164 14:57:29 -- event/cpu_locks.sh@103 -- # waitforlisten 67415 /var/tmp/spdk2.sock 00:06:59.164 14:57:29 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:59.164 14:57:29 -- common/autotest_common.sh@829 -- # '[' -z 67415 ']' 00:06:59.164 14:57:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.164 14:57:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.164 14:57:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.164 14:57:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.164 14:57:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.164 [2024-11-20 14:57:29.770422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:59.164 [2024-11-20 14:57:29.770889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67415 ] 00:06:59.164 [2024-11-20 14:57:29.916700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.424 [2024-11-20 14:57:29.982796] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:59.424 [2024-11-20 14:57:29.982957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.358 14:57:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.358 14:57:30 -- common/autotest_common.sh@862 -- # return 0 00:07:00.358 14:57:30 -- event/cpu_locks.sh@105 -- # locks_exist 67415 00:07:00.358 14:57:30 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.358 14:57:30 -- event/cpu_locks.sh@22 -- # lslocks -p 67415 00:07:00.925 14:57:31 -- event/cpu_locks.sh@107 -- # killprocess 67399 00:07:00.925 14:57:31 -- common/autotest_common.sh@936 -- # '[' -z 67399 ']' 00:07:00.925 14:57:31 -- common/autotest_common.sh@940 -- # kill -0 67399 00:07:00.925 14:57:31 -- common/autotest_common.sh@941 -- # uname 00:07:00.925 14:57:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:00.925 14:57:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67399 00:07:00.925 14:57:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:00.925 killing process with pid 67399 00:07:00.925 14:57:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:00.925 14:57:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67399' 00:07:00.925 14:57:31 -- common/autotest_common.sh@955 -- # kill 67399 00:07:00.925 14:57:31 -- common/autotest_common.sh@960 -- # wait 67399 00:07:01.492 14:57:32 -- event/cpu_locks.sh@108 -- # killprocess 67415 00:07:01.492 14:57:32 -- common/autotest_common.sh@936 -- # '[' -z 67415 ']' 00:07:01.492 14:57:32 -- common/autotest_common.sh@940 -- # kill -0 67415 00:07:01.492 14:57:32 -- common/autotest_common.sh@941 -- # uname 00:07:01.492 14:57:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:01.492 14:57:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67415 00:07:01.492 killing process with pid 67415 00:07:01.492 14:57:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:01.492 14:57:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:01.492 14:57:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67415' 00:07:01.492 14:57:32 -- common/autotest_common.sh@955 -- # kill 67415 00:07:01.492 14:57:32 -- common/autotest_common.sh@960 -- # wait 67415 00:07:01.750 ************************************ 00:07:01.750 END TEST locking_app_on_unlocked_coremask 00:07:01.750 ************************************ 00:07:01.750 00:07:01.750 real 0m3.737s 00:07:01.750 user 0m4.509s 00:07:01.750 sys 0m0.944s 00:07:01.750 14:57:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:01.750 14:57:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.750 14:57:32 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:01.750 14:57:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:01.750 14:57:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.750 14:57:32 -- common/autotest_common.sh@10 -- # set +x 
00:07:01.750 ************************************ 00:07:01.750 START TEST locking_app_on_locked_coremask 00:07:01.750 ************************************ 00:07:01.750 14:57:32 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:01.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.750 14:57:32 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67471 00:07:01.750 14:57:32 -- event/cpu_locks.sh@116 -- # waitforlisten 67471 /var/tmp/spdk.sock 00:07:01.750 14:57:32 -- common/autotest_common.sh@829 -- # '[' -z 67471 ']' 00:07:01.750 14:57:32 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.750 14:57:32 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:01.750 14:57:32 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.750 14:57:32 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:01.750 14:57:32 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:01.750 14:57:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.750 [2024-11-20 14:57:32.505613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:01.750 [2024-11-20 14:57:32.505727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67471 ] 00:07:02.008 [2024-11-20 14:57:32.639933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.008 [2024-11-20 14:57:32.679369] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:02.008 [2024-11-20 14:57:32.679595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.008 14:57:33 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.008 14:57:33 -- common/autotest_common.sh@862 -- # return 0 00:07:03.008 14:57:33 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67487 00:07:03.008 14:57:33 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67487 /var/tmp/spdk2.sock 00:07:03.008 14:57:33 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.008 14:57:33 -- common/autotest_common.sh@650 -- # local es=0 00:07:03.008 14:57:33 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67487 /var/tmp/spdk2.sock 00:07:03.008 14:57:33 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:03.008 14:57:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.008 14:57:33 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:03.008 14:57:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:03.008 14:57:33 -- common/autotest_common.sh@653 -- # waitforlisten 67487 /var/tmp/spdk2.sock 00:07:03.008 14:57:33 -- common/autotest_common.sh@829 -- # '[' -z 67487 ']' 00:07:03.008 14:57:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.008 14:57:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.008 14:57:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:03.008 14:57:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.008 14:57:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.008 [2024-11-20 14:57:33.640121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:03.008 [2024-11-20 14:57:33.640478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67487 ] 00:07:03.298 [2024-11-20 14:57:33.787239] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67471 has claimed it. 00:07:03.298 [2024-11-20 14:57:33.787329] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:03.866 ERROR: process (pid: 67487) is no longer running 00:07:03.866 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67487) - No such process 00:07:03.866 14:57:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.866 14:57:34 -- common/autotest_common.sh@862 -- # return 1 00:07:03.866 14:57:34 -- common/autotest_common.sh@653 -- # es=1 00:07:03.866 14:57:34 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:03.866 14:57:34 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:03.866 14:57:34 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:03.866 14:57:34 -- event/cpu_locks.sh@122 -- # locks_exist 67471 00:07:03.866 14:57:34 -- event/cpu_locks.sh@22 -- # lslocks -p 67471 00:07:03.866 14:57:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.125 14:57:34 -- event/cpu_locks.sh@124 -- # killprocess 67471 00:07:04.125 14:57:34 -- common/autotest_common.sh@936 -- # '[' -z 67471 ']' 00:07:04.125 14:57:34 -- common/autotest_common.sh@940 -- # kill -0 67471 00:07:04.125 14:57:34 -- common/autotest_common.sh@941 -- # uname 00:07:04.125 14:57:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.125 14:57:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67471 00:07:04.125 killing process with pid 67471 00:07:04.125 14:57:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.125 14:57:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.125 14:57:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67471' 00:07:04.125 14:57:34 -- common/autotest_common.sh@955 -- # kill 67471 00:07:04.125 14:57:34 -- common/autotest_common.sh@960 -- # wait 67471 00:07:04.383 00:07:04.383 real 0m2.629s 00:07:04.384 user 0m3.228s 00:07:04.384 sys 0m0.584s 00:07:04.384 14:57:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:04.384 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:07:04.384 ************************************ 00:07:04.384 END TEST locking_app_on_locked_coremask 00:07:04.384 ************************************ 00:07:04.384 14:57:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:04.384 14:57:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:04.384 14:57:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.384 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:07:04.384 ************************************ 00:07:04.384 START TEST locking_overlapped_coremask 00:07:04.384 ************************************ 00:07:04.384 14:57:35 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:04.384 14:57:35 
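Note: the locking_app_on_locked_coremask run above and the locking_overlapped_coremask run that follows exercise the same mechanism: a second spdk_tgt whose cpumask overlaps an already-claimed core aborts with "Cannot create lock on core N, probably process M has claimed it". A rough reproduction of that conflict, assuming spdk_tgt is built at build/bin/spdk_tgt (paths and the fixed sleep are illustrative; the tests poll the RPC socket instead of sleeping):

  build/bin/spdk_tgt -m 0x7 &                       # first instance claims cores 0-2
  first=$!
  sleep 2
  # cores 2-4 (mask 0x1c) overlap on core 2, so this second instance is expected
  # to exit with "Unable to acquire lock on assigned core mask - exiting."
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
  echo "second instance exit code: $?"
  kill "$first"
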
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67538 00:07:04.384 14:57:35 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:04.384 14:57:35 -- event/cpu_locks.sh@133 -- # waitforlisten 67538 /var/tmp/spdk.sock 00:07:04.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.384 14:57:35 -- common/autotest_common.sh@829 -- # '[' -z 67538 ']' 00:07:04.384 14:57:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.384 14:57:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.384 14:57:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.384 14:57:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.384 14:57:35 -- common/autotest_common.sh@10 -- # set +x 00:07:04.384 [2024-11-20 14:57:35.179094] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:04.384 [2024-11-20 14:57:35.179195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67538 ] 00:07:04.643 [2024-11-20 14:57:35.313579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:04.643 [2024-11-20 14:57:35.349137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.643 [2024-11-20 14:57:35.349602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.643 [2024-11-20 14:57:35.349674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.643 [2024-11-20 14:57:35.349677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.579 14:57:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.579 14:57:36 -- common/autotest_common.sh@862 -- # return 0 00:07:05.579 14:57:36 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67556 00:07:05.579 14:57:36 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:05.579 14:57:36 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67556 /var/tmp/spdk2.sock 00:07:05.579 14:57:36 -- common/autotest_common.sh@650 -- # local es=0 00:07:05.579 14:57:36 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 67556 /var/tmp/spdk2.sock 00:07:05.579 14:57:36 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:05.579 14:57:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.579 14:57:36 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:05.579 14:57:36 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.579 14:57:36 -- common/autotest_common.sh@653 -- # waitforlisten 67556 /var/tmp/spdk2.sock 00:07:05.579 14:57:36 -- common/autotest_common.sh@829 -- # '[' -z 67556 ']' 00:07:05.579 14:57:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.579 14:57:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.579 14:57:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:05.579 14:57:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.579 14:57:36 -- common/autotest_common.sh@10 -- # set +x 00:07:05.579 [2024-11-20 14:57:36.306276] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:05.579 [2024-11-20 14:57:36.306604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67556 ] 00:07:05.838 [2024-11-20 14:57:36.450412] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67538 has claimed it. 00:07:05.838 [2024-11-20 14:57:36.450493] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:06.406 ERROR: process (pid: 67556) is no longer running 00:07:06.406 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (67556) - No such process 00:07:06.406 14:57:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.406 14:57:37 -- common/autotest_common.sh@862 -- # return 1 00:07:06.406 14:57:37 -- common/autotest_common.sh@653 -- # es=1 00:07:06.406 14:57:37 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:06.406 14:57:37 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:06.406 14:57:37 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:06.406 14:57:37 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:06.406 14:57:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:06.406 14:57:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:06.406 14:57:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:06.406 14:57:37 -- event/cpu_locks.sh@141 -- # killprocess 67538 00:07:06.406 14:57:37 -- common/autotest_common.sh@936 -- # '[' -z 67538 ']' 00:07:06.406 14:57:37 -- common/autotest_common.sh@940 -- # kill -0 67538 00:07:06.406 14:57:37 -- common/autotest_common.sh@941 -- # uname 00:07:06.406 14:57:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:06.406 14:57:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67538 00:07:06.406 14:57:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:06.406 14:57:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:06.406 14:57:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67538' 00:07:06.406 killing process with pid 67538 00:07:06.406 14:57:37 -- common/autotest_common.sh@955 -- # kill 67538 00:07:06.406 14:57:37 -- common/autotest_common.sh@960 -- # wait 67538 00:07:06.665 00:07:06.665 real 0m2.222s 00:07:06.665 user 0m6.614s 00:07:06.665 sys 0m0.352s 00:07:06.665 14:57:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:06.665 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.665 ************************************ 00:07:06.665 END TEST locking_overlapped_coremask 00:07:06.665 ************************************ 00:07:06.665 14:57:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:06.665 14:57:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.665 14:57:37 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.665 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.665 ************************************ 00:07:06.665 START TEST locking_overlapped_coremask_via_rpc 00:07:06.665 ************************************ 00:07:06.665 14:57:37 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:06.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.665 14:57:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67596 00:07:06.665 14:57:37 -- event/cpu_locks.sh@149 -- # waitforlisten 67596 /var/tmp/spdk.sock 00:07:06.665 14:57:37 -- common/autotest_common.sh@829 -- # '[' -z 67596 ']' 00:07:06.665 14:57:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.665 14:57:37 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:06.665 14:57:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.665 14:57:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.665 14:57:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.665 14:57:37 -- common/autotest_common.sh@10 -- # set +x 00:07:06.665 [2024-11-20 14:57:37.453895] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:06.665 [2024-11-20 14:57:37.453992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67596 ] 00:07:06.923 [2024-11-20 14:57:37.588667] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:06.923 [2024-11-20 14:57:37.588970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.923 [2024-11-20 14:57:37.626865] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:06.923 [2024-11-20 14:57:37.627424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.923 [2024-11-20 14:57:37.627510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.923 [2024-11-20 14:57:37.627515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.858 14:57:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.858 14:57:38 -- common/autotest_common.sh@862 -- # return 0 00:07:07.858 14:57:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67614 00:07:07.858 14:57:38 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:07.858 14:57:38 -- event/cpu_locks.sh@153 -- # waitforlisten 67614 /var/tmp/spdk2.sock 00:07:07.858 14:57:38 -- common/autotest_common.sh@829 -- # '[' -z 67614 ']' 00:07:07.858 14:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.858 14:57:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.858 14:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:07.858 14:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.858 14:57:38 -- common/autotest_common.sh@10 -- # set +x 00:07:07.858 [2024-11-20 14:57:38.560679] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:07.858 [2024-11-20 14:57:38.561057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67614 ] 00:07:08.117 [2024-11-20 14:57:38.709598] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:08.117 [2024-11-20 14:57:38.709708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.117 [2024-11-20 14:57:38.786060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:08.117 [2024-11-20 14:57:38.786400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.117 [2024-11-20 14:57:38.786722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:08.117 [2024-11-20 14:57:38.786724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.052 14:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.052 14:57:39 -- common/autotest_common.sh@862 -- # return 0 00:07:09.052 14:57:39 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.052 14:57:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.052 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.052 14:57:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.052 14:57:39 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.052 14:57:39 -- common/autotest_common.sh@650 -- # local es=0 00:07:09.052 14:57:39 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.052 14:57:39 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:09.052 14:57:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.052 14:57:39 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:09.052 14:57:39 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.052 14:57:39 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.052 14:57:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.052 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.052 [2024-11-20 14:57:39.645857] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67596 has claimed it. 00:07:09.052 request: 00:07:09.052 { 00:07:09.052 "method": "framework_enable_cpumask_locks", 00:07:09.052 "req_id": 1 00:07:09.052 } 00:07:09.052 Got JSON-RPC error response 00:07:09.052 response: 00:07:09.052 { 00:07:09.052 "code": -32603, 00:07:09.052 "message": "Failed to claim CPU core: 2" 00:07:09.052 } 00:07:09.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
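Note: the failed claim above is a plain JSON-RPC exchange: the second target (listening on /var/tmp/spdk2.sock with locks disabled) is asked to enable cpumask locks while the first target still holds core 2. The same call can be issued by hand; a sketch assuming SPDK's scripts/rpc.py client is available:

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # request sent:   {"method": "framework_enable_cpumask_locks", "req_id": 1}
  # error returned: {"code": -32603, "message": "Failed to claim CPU core: 2"}
  # (-32603 is the generic JSON-RPC "internal error" code; the test treats the
  #  non-zero exit from this call as the expected outcome)
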
00:07:09.053 14:57:39 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:09.053 14:57:39 -- common/autotest_common.sh@653 -- # es=1 00:07:09.053 14:57:39 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.053 14:57:39 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.053 14:57:39 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.053 14:57:39 -- event/cpu_locks.sh@158 -- # waitforlisten 67596 /var/tmp/spdk.sock 00:07:09.053 14:57:39 -- common/autotest_common.sh@829 -- # '[' -z 67596 ']' 00:07:09.053 14:57:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.053 14:57:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.053 14:57:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.053 14:57:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.053 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.311 14:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.311 14:57:39 -- common/autotest_common.sh@862 -- # return 0 00:07:09.311 14:57:39 -- event/cpu_locks.sh@159 -- # waitforlisten 67614 /var/tmp/spdk2.sock 00:07:09.311 14:57:39 -- common/autotest_common.sh@829 -- # '[' -z 67614 ']' 00:07:09.311 14:57:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.311 14:57:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.311 14:57:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.311 14:57:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.311 14:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:09.570 14:57:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.570 14:57:40 -- common/autotest_common.sh@862 -- # return 0 00:07:09.570 14:57:40 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:09.571 14:57:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.571 ************************************ 00:07:09.571 END TEST locking_overlapped_coremask_via_rpc 00:07:09.571 ************************************ 00:07:09.571 14:57:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.571 14:57:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.571 00:07:09.571 real 0m2.890s 00:07:09.571 user 0m1.602s 00:07:09.571 sys 0m0.200s 00:07:09.571 14:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:09.571 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:09.571 14:57:40 -- event/cpu_locks.sh@174 -- # cleanup 00:07:09.571 14:57:40 -- event/cpu_locks.sh@15 -- # [[ -z 67596 ]] 00:07:09.571 14:57:40 -- event/cpu_locks.sh@15 -- # killprocess 67596 00:07:09.571 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 67596 ']' 00:07:09.571 14:57:40 -- common/autotest_common.sh@940 -- # kill -0 67596 00:07:09.571 14:57:40 -- common/autotest_common.sh@941 -- # uname 00:07:09.571 14:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.571 14:57:40 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 67596 00:07:09.571 14:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.571 killing process with pid 67596 00:07:09.571 14:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.571 14:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67596' 00:07:09.571 14:57:40 -- common/autotest_common.sh@955 -- # kill 67596 00:07:09.571 14:57:40 -- common/autotest_common.sh@960 -- # wait 67596 00:07:09.829 14:57:40 -- event/cpu_locks.sh@16 -- # [[ -z 67614 ]] 00:07:09.829 14:57:40 -- event/cpu_locks.sh@16 -- # killprocess 67614 00:07:09.829 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 67614 ']' 00:07:09.829 14:57:40 -- common/autotest_common.sh@940 -- # kill -0 67614 00:07:09.829 14:57:40 -- common/autotest_common.sh@941 -- # uname 00:07:09.829 14:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.829 14:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67614 00:07:10.086 killing process with pid 67614 00:07:10.086 14:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:10.086 14:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:10.087 14:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67614' 00:07:10.087 14:57:40 -- common/autotest_common.sh@955 -- # kill 67614 00:07:10.087 14:57:40 -- common/autotest_common.sh@960 -- # wait 67614 00:07:10.087 14:57:40 -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.087 14:57:40 -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.087 14:57:40 -- event/cpu_locks.sh@15 -- # [[ -z 67596 ]] 00:07:10.087 14:57:40 -- event/cpu_locks.sh@15 -- # killprocess 67596 00:07:10.087 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 67596 ']' 00:07:10.087 14:57:40 -- common/autotest_common.sh@940 -- # kill -0 67596 00:07:10.087 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67596) - No such process 00:07:10.087 Process with pid 67596 is not found 00:07:10.087 14:57:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67596 is not found' 00:07:10.087 14:57:40 -- event/cpu_locks.sh@16 -- # [[ -z 67614 ]] 00:07:10.087 Process with pid 67614 is not found 00:07:10.087 14:57:40 -- event/cpu_locks.sh@16 -- # killprocess 67614 00:07:10.087 14:57:40 -- common/autotest_common.sh@936 -- # '[' -z 67614 ']' 00:07:10.087 14:57:40 -- common/autotest_common.sh@940 -- # kill -0 67614 00:07:10.087 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (67614) - No such process 00:07:10.087 14:57:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 67614 is not found' 00:07:10.087 14:57:40 -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.087 00:07:10.087 real 0m20.136s 00:07:10.087 user 0m38.236s 00:07:10.087 sys 0m4.645s 00:07:10.087 14:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.087 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.087 ************************************ 00:07:10.087 END TEST cpu_locks 00:07:10.087 ************************************ 00:07:10.346 ************************************ 00:07:10.346 END TEST event 00:07:10.346 ************************************ 00:07:10.346 00:07:10.346 real 0m47.755s 00:07:10.346 user 1m35.627s 00:07:10.346 sys 0m8.095s 00:07:10.346 14:57:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.346 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.346 14:57:40 -- spdk/autotest.sh@175 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:10.346 14:57:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.346 14:57:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.346 14:57:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.346 ************************************ 00:07:10.346 START TEST thread 00:07:10.346 ************************************ 00:07:10.346 14:57:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:10.346 * Looking for test storage... 00:07:10.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:10.346 14:57:41 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:10.346 14:57:41 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:10.346 14:57:41 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:10.346 14:57:41 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:10.346 14:57:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:10.346 14:57:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:10.346 14:57:41 -- scripts/common.sh@335 -- # IFS=.-: 00:07:10.346 14:57:41 -- scripts/common.sh@335 -- # read -ra ver1 00:07:10.346 14:57:41 -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.346 14:57:41 -- scripts/common.sh@336 -- # read -ra ver2 00:07:10.346 14:57:41 -- scripts/common.sh@337 -- # local 'op=<' 00:07:10.346 14:57:41 -- scripts/common.sh@339 -- # ver1_l=2 00:07:10.346 14:57:41 -- scripts/common.sh@340 -- # ver2_l=1 00:07:10.346 14:57:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:10.346 14:57:41 -- scripts/common.sh@343 -- # case "$op" in 00:07:10.346 14:57:41 -- scripts/common.sh@344 -- # : 1 00:07:10.346 14:57:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:10.346 14:57:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.346 14:57:41 -- scripts/common.sh@364 -- # decimal 1 00:07:10.346 14:57:41 -- scripts/common.sh@352 -- # local d=1 00:07:10.346 14:57:41 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.346 14:57:41 -- scripts/common.sh@354 -- # echo 1 00:07:10.346 14:57:41 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:10.346 14:57:41 -- scripts/common.sh@365 -- # decimal 2 00:07:10.346 14:57:41 -- scripts/common.sh@352 -- # local d=2 00:07:10.346 14:57:41 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.346 14:57:41 -- scripts/common.sh@354 -- # echo 2 00:07:10.346 14:57:41 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:10.346 14:57:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:10.346 14:57:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:10.346 14:57:41 -- scripts/common.sh@367 -- # return 0 00:07:10.346 14:57:41 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.346 --rc genhtml_branch_coverage=1 00:07:10.346 --rc genhtml_function_coverage=1 00:07:10.346 --rc genhtml_legend=1 00:07:10.346 --rc geninfo_all_blocks=1 00:07:10.346 --rc geninfo_unexecuted_blocks=1 00:07:10.346 00:07:10.346 ' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.346 --rc genhtml_branch_coverage=1 00:07:10.346 --rc genhtml_function_coverage=1 00:07:10.346 --rc genhtml_legend=1 00:07:10.346 --rc geninfo_all_blocks=1 00:07:10.346 --rc geninfo_unexecuted_blocks=1 00:07:10.346 00:07:10.346 ' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.346 --rc genhtml_branch_coverage=1 00:07:10.346 --rc genhtml_function_coverage=1 00:07:10.346 --rc genhtml_legend=1 00:07:10.346 --rc geninfo_all_blocks=1 00:07:10.346 --rc geninfo_unexecuted_blocks=1 00:07:10.346 00:07:10.346 ' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:10.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.346 --rc genhtml_branch_coverage=1 00:07:10.346 --rc genhtml_function_coverage=1 00:07:10.346 --rc genhtml_legend=1 00:07:10.346 --rc geninfo_all_blocks=1 00:07:10.346 --rc geninfo_unexecuted_blocks=1 00:07:10.346 00:07:10.346 ' 00:07:10.346 14:57:41 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.346 14:57:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:10.346 14:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.346 14:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.346 ************************************ 00:07:10.346 START TEST thread_poller_perf 00:07:10.346 ************************************ 00:07:10.346 14:57:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.604 [2024-11-20 14:57:41.152398] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:10.604 [2024-11-20 14:57:41.152499] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67749 ] 00:07:10.604 [2024-11-20 14:57:41.290113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.604 [2024-11-20 14:57:41.329071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.604 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:11.979 [2024-11-20T14:57:42.783Z] ====================================== 00:07:11.979 [2024-11-20T14:57:42.783Z] busy:2211305341 (cyc) 00:07:11.979 [2024-11-20T14:57:42.783Z] total_run_count: 268000 00:07:11.979 [2024-11-20T14:57:42.783Z] tsc_hz: 2200000000 (cyc) 00:07:11.979 [2024-11-20T14:57:42.783Z] ====================================== 00:07:11.979 [2024-11-20T14:57:42.783Z] poller_cost: 8251 (cyc), 3750 (nsec) 00:07:11.979 00:07:11.979 real 0m1.258s 00:07:11.979 user 0m1.100s 00:07:11.979 sys 0m0.048s 00:07:11.979 14:57:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.979 14:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:11.979 ************************************ 00:07:11.979 END TEST thread_poller_perf 00:07:11.979 ************************************ 00:07:11.979 14:57:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.979 14:57:42 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:11.979 14:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.979 14:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:11.979 ************************************ 00:07:11.979 START TEST thread_poller_perf 00:07:11.979 ************************************ 00:07:11.979 14:57:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:11.979 [2024-11-20 14:57:42.458827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:11.979 [2024-11-20 14:57:42.458918] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67779 ] 00:07:11.979 [2024-11-20 14:57:42.592804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.979 [2024-11-20 14:57:42.629813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.979 Running 1000 pollers for 1 seconds with 0 microseconds period. 
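Note: the poller_cost figures printed by poller_perf are ratios of the counters above them: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. A worked check using the first run's numbers (1 microsecond period):

  awk 'BEGIN { busy = 2211305341; runs = 268000; hz = 2200000000
               cyc = int(busy / runs)                   # 8251 cycles per poll
               printf "%d cyc, %d nsec\n", cyc, int(cyc / hz * 1e9) }'
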
00:07:12.914 [2024-11-20T14:57:43.718Z] ====================================== 00:07:12.914 [2024-11-20T14:57:43.718Z] busy:2203112105 (cyc) 00:07:12.914 [2024-11-20T14:57:43.718Z] total_run_count: 3997000 00:07:12.914 [2024-11-20T14:57:43.718Z] tsc_hz: 2200000000 (cyc) 00:07:12.914 [2024-11-20T14:57:43.718Z] ====================================== 00:07:12.914 [2024-11-20T14:57:43.718Z] poller_cost: 551 (cyc), 250 (nsec) 00:07:12.914 00:07:12.914 real 0m1.247s 00:07:12.914 user 0m1.096s 00:07:12.914 sys 0m0.042s 00:07:12.914 14:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.914 14:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:12.914 ************************************ 00:07:12.914 END TEST thread_poller_perf 00:07:12.914 ************************************ 00:07:13.172 14:57:43 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:13.172 00:07:13.172 real 0m2.765s 00:07:13.172 user 0m2.336s 00:07:13.172 sys 0m0.213s 00:07:13.172 14:57:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:13.172 14:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.172 ************************************ 00:07:13.172 END TEST thread 00:07:13.172 ************************************ 00:07:13.172 14:57:43 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:13.172 14:57:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:13.172 14:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.172 ************************************ 00:07:13.172 START TEST accel 00:07:13.172 ************************************ 00:07:13.172 14:57:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:13.172 * Looking for test storage... 00:07:13.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:13.172 14:57:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:13.172 14:57:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:13.172 14:57:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:13.172 14:57:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:13.172 14:57:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:13.172 14:57:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:13.172 14:57:43 -- scripts/common.sh@335 -- # IFS=.-: 00:07:13.172 14:57:43 -- scripts/common.sh@335 -- # read -ra ver1 00:07:13.172 14:57:43 -- scripts/common.sh@336 -- # IFS=.-: 00:07:13.172 14:57:43 -- scripts/common.sh@336 -- # read -ra ver2 00:07:13.172 14:57:43 -- scripts/common.sh@337 -- # local 'op=<' 00:07:13.172 14:57:43 -- scripts/common.sh@339 -- # ver1_l=2 00:07:13.172 14:57:43 -- scripts/common.sh@340 -- # ver2_l=1 00:07:13.172 14:57:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:13.172 14:57:43 -- scripts/common.sh@343 -- # case "$op" in 00:07:13.172 14:57:43 -- scripts/common.sh@344 -- # : 1 00:07:13.172 14:57:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:13.172 14:57:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:13.172 14:57:43 -- scripts/common.sh@364 -- # decimal 1 00:07:13.172 14:57:43 -- scripts/common.sh@352 -- # local d=1 00:07:13.172 14:57:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:13.172 14:57:43 -- scripts/common.sh@354 -- # echo 1 00:07:13.172 14:57:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:13.172 14:57:43 -- scripts/common.sh@365 -- # decimal 2 00:07:13.172 14:57:43 -- scripts/common.sh@352 -- # local d=2 00:07:13.172 14:57:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:13.172 14:57:43 -- scripts/common.sh@354 -- # echo 2 00:07:13.172 14:57:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:13.172 14:57:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:13.172 14:57:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:13.172 14:57:43 -- scripts/common.sh@367 -- # return 0 00:07:13.172 14:57:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.172 --rc genhtml_branch_coverage=1 00:07:13.172 --rc genhtml_function_coverage=1 00:07:13.172 --rc genhtml_legend=1 00:07:13.172 --rc geninfo_all_blocks=1 00:07:13.172 --rc geninfo_unexecuted_blocks=1 00:07:13.172 00:07:13.172 ' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.172 --rc genhtml_branch_coverage=1 00:07:13.172 --rc genhtml_function_coverage=1 00:07:13.172 --rc genhtml_legend=1 00:07:13.172 --rc geninfo_all_blocks=1 00:07:13.172 --rc geninfo_unexecuted_blocks=1 00:07:13.172 00:07:13.172 ' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.172 --rc genhtml_branch_coverage=1 00:07:13.172 --rc genhtml_function_coverage=1 00:07:13.172 --rc genhtml_legend=1 00:07:13.172 --rc geninfo_all_blocks=1 00:07:13.172 --rc geninfo_unexecuted_blocks=1 00:07:13.172 00:07:13.172 ' 00:07:13.172 14:57:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:13.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:13.172 --rc genhtml_branch_coverage=1 00:07:13.172 --rc genhtml_function_coverage=1 00:07:13.172 --rc genhtml_legend=1 00:07:13.172 --rc geninfo_all_blocks=1 00:07:13.172 --rc geninfo_unexecuted_blocks=1 00:07:13.172 00:07:13.172 ' 00:07:13.172 14:57:43 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:13.172 14:57:43 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:13.172 14:57:43 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:13.172 14:57:43 -- accel/accel.sh@59 -- # spdk_tgt_pid=67861 00:07:13.172 14:57:43 -- accel/accel.sh@60 -- # waitforlisten 67861 00:07:13.172 14:57:43 -- accel/accel.sh@58 -- # build_accel_config 00:07:13.172 14:57:43 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:13.172 14:57:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.172 14:57:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.172 14:57:43 -- common/autotest_common.sh@829 -- # '[' -z 67861 ']' 00:07:13.172 14:57:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.172 14:57:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.172 14:57:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.172 14:57:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.173 14:57:43 
-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.173 14:57:43 -- accel/accel.sh@42 -- # jq -r . 00:07:13.173 14:57:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.173 14:57:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.173 14:57:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.173 14:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.434 [2024-11-20 14:57:44.005625] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:13.434 [2024-11-20 14:57:44.005750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67861 ] 00:07:13.434 [2024-11-20 14:57:44.143049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.434 [2024-11-20 14:57:44.183391] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.434 [2024-11-20 14:57:44.183579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.393 14:57:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.393 14:57:45 -- common/autotest_common.sh@862 -- # return 0 00:07:14.393 14:57:45 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:14.393 14:57:45 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:14.393 14:57:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.393 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.393 14:57:45 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:14.393 14:57:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # IFS== 00:07:14.393 14:57:45 -- accel/accel.sh@64 -- # read -r opc module 00:07:14.393 14:57:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:14.393 14:57:45 -- accel/accel.sh@67 -- # killprocess 67861 00:07:14.393 14:57:45 -- common/autotest_common.sh@936 -- # '[' -z 67861 ']' 00:07:14.393 14:57:45 -- common/autotest_common.sh@940 -- # kill -0 67861 00:07:14.393 14:57:45 -- common/autotest_common.sh@941 -- # uname 00:07:14.393 14:57:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:14.393 14:57:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67861 00:07:14.393 killing process with pid 67861 00:07:14.393 14:57:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:14.393 14:57:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:14.393 14:57:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67861' 00:07:14.393 14:57:45 -- common/autotest_common.sh@955 -- # kill 67861 00:07:14.393 14:57:45 -- common/autotest_common.sh@960 -- # wait 67861 00:07:14.653 14:57:45 -- accel/accel.sh@68 -- # trap - ERR 00:07:14.653 14:57:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:14.653 14:57:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.653 14:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.653 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.653 14:57:45 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:14.653 14:57:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:14.653 14:57:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.653 14:57:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.653 14:57:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.653 14:57:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.653 14:57:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.653 14:57:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.653 14:57:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.653 14:57:45 -- accel/accel.sh@42 -- # jq -r . 
00:07:14.653 14:57:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.653 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 14:57:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:14.912 14:57:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:14.912 14:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.912 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.912 ************************************ 00:07:14.912 START TEST accel_missing_filename 00:07:14.912 ************************************ 00:07:14.912 14:57:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:14.912 14:57:45 -- common/autotest_common.sh@650 -- # local es=0 00:07:14.912 14:57:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:14.912 14:57:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:14.912 14:57:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.912 14:57:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:14.912 14:57:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.912 14:57:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:14.912 14:57:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:14.912 14:57:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.912 14:57:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.912 14:57:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.912 14:57:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.912 14:57:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.912 14:57:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.912 14:57:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.912 14:57:45 -- accel/accel.sh@42 -- # jq -r . 00:07:14.912 [2024-11-20 14:57:45.504596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:14.912 [2024-11-20 14:57:45.504769] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67914 ] 00:07:14.912 [2024-11-20 14:57:45.638760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.912 [2024-11-20 14:57:45.682191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.171 [2024-11-20 14:57:45.719057] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.171 [2024-11-20 14:57:45.762868] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:15.171 A filename is required. 
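The failure above is the intended outcome: the compress workload cannot start without an uncompressed input file. Reconstructed from the command lines traced in this section (the harness also feeds a generated config via -c /dev/fd/62), the two compress checks differ only in their arguments:

  # What just failed: compress with no input file.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
  # The accel_compress_verify test below supplies the file via -l but keeps -y,
  # which compress rejects ("Compression does not support the verify option"):
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y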
00:07:15.171 14:57:45 -- common/autotest_common.sh@653 -- # es=234 00:07:15.171 14:57:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.171 14:57:45 -- common/autotest_common.sh@662 -- # es=106 00:07:15.171 14:57:45 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.171 14:57:45 -- common/autotest_common.sh@670 -- # es=1 00:07:15.171 14:57:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.171 00:07:15.171 real 0m0.346s 00:07:15.172 user 0m0.200s 00:07:15.172 sys 0m0.083s 00:07:15.172 14:57:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.172 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 ************************************ 00:07:15.172 END TEST accel_missing_filename 00:07:15.172 ************************************ 00:07:15.172 14:57:45 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.172 14:57:45 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:15.172 14:57:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.172 14:57:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.172 ************************************ 00:07:15.172 START TEST accel_compress_verify 00:07:15.172 ************************************ 00:07:15.172 14:57:45 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.172 14:57:45 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.172 14:57:45 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.172 14:57:45 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.172 14:57:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.172 14:57:45 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.172 14:57:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.172 14:57:45 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.172 14:57:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:15.172 14:57:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.172 14:57:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.172 14:57:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.172 14:57:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.172 14:57:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.172 14:57:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.172 14:57:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.172 14:57:45 -- accel/accel.sh@42 -- # jq -r . 00:07:15.172 [2024-11-20 14:57:45.890417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:15.172 [2024-11-20 14:57:45.890502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67939 ] 00:07:15.430 [2024-11-20 14:57:46.021787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.430 [2024-11-20 14:57:46.057230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.430 [2024-11-20 14:57:46.087739] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:15.430 [2024-11-20 14:57:46.128772] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:15.430 00:07:15.430 Compression does not support the verify option, aborting. 00:07:15.430 14:57:46 -- common/autotest_common.sh@653 -- # es=161 00:07:15.430 14:57:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.430 14:57:46 -- common/autotest_common.sh@662 -- # es=33 00:07:15.430 14:57:46 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:15.430 14:57:46 -- common/autotest_common.sh@670 -- # es=1 00:07:15.430 14:57:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.430 00:07:15.430 real 0m0.317s 00:07:15.430 user 0m0.190s 00:07:15.430 sys 0m0.070s 00:07:15.430 14:57:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.430 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.430 ************************************ 00:07:15.430 END TEST accel_compress_verify 00:07:15.430 ************************************ 00:07:15.430 14:57:46 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:15.430 14:57:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:15.430 14:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.430 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 ************************************ 00:07:15.691 START TEST accel_wrong_workload 00:07:15.691 ************************************ 00:07:15.691 14:57:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:15.691 14:57:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.691 14:57:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:15.691 14:57:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.691 14:57:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:15.691 14:57:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:15.691 14:57:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.691 14:57:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.691 14:57:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.691 14:57:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.691 14:57:46 -- accel/accel.sh@42 -- # jq -r . 
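The es= arithmetic traced after each failing run above is the harness inverting the exit status: a negative test passes only when accel_perf fails. A minimal sketch of that convention, assuming the shape the trace suggests (the real NOT helper in autotest_common.sh handles more cases):

  NOT() {
      local es=0
      "$@" || es=$?                          # run the command that is expected to fail
      (( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106 and 161 -> 33, as in the traces above
      (( es != 0 ))                          # the test succeeds only if the wrapped command did not
  }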
00:07:15.691 Unsupported workload type: foobar 00:07:15.691 [2024-11-20 14:57:46.259608] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:15.691 accel_perf options: 00:07:15.691 [-h help message] 00:07:15.691 [-q queue depth per core] 00:07:15.691 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.691 [-T number of threads per core 00:07:15.691 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.691 [-t time in seconds] 00:07:15.691 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.691 [ dif_verify, , dif_generate, dif_generate_copy 00:07:15.691 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.691 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.691 [-S for crc32c workload, use this seed value (default 0) 00:07:15.691 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.691 [-f for fill workload, use this BYTE value (default 255) 00:07:15.691 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.691 [-y verify result if this switch is on] 00:07:15.691 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.691 Can be used to spread operations across a wider range of memory. 00:07:15.691 14:57:46 -- common/autotest_common.sh@653 -- # es=1 00:07:15.691 14:57:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.691 14:57:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.691 14:57:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.691 00:07:15.691 real 0m0.032s 00:07:15.691 user 0m0.018s 00:07:15.691 sys 0m0.013s 00:07:15.691 14:57:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.691 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 ************************************ 00:07:15.691 END TEST accel_wrong_workload 00:07:15.691 ************************************ 00:07:15.691 14:57:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.691 14:57:46 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:15.691 14:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.691 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 ************************************ 00:07:15.691 START TEST accel_negative_buffers 00:07:15.691 ************************************ 00:07:15.691 14:57:46 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:15.691 14:57:46 -- common/autotest_common.sh@650 -- # local es=0 00:07:15.691 14:57:46 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:15.691 14:57:46 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:15.691 14:57:46 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:15.691 14:57:46 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:15.691 14:57:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:15.691 14:57:46 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:15.691 14:57:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.691 14:57:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.691 14:57:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.691 14:57:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.691 14:57:46 -- accel/accel.sh@42 -- # jq -r . 00:07:15.691 -x option must be non-negative. 00:07:15.691 [2024-11-20 14:57:46.332284] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:15.691 accel_perf options: 00:07:15.691 [-h help message] 00:07:15.691 [-q queue depth per core] 00:07:15.691 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:15.691 [-T number of threads per core 00:07:15.691 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:15.691 [-t time in seconds] 00:07:15.691 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:15.691 [ dif_verify, , dif_generate, dif_generate_copy 00:07:15.691 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:15.691 [-l for compress/decompress workloads, name of uncompressed input file 00:07:15.691 [-S for crc32c workload, use this seed value (default 0) 00:07:15.691 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:15.691 [-f for fill workload, use this BYTE value (default 255) 00:07:15.691 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:15.691 [-y verify result if this switch is on] 00:07:15.691 [-a tasks to allocate per core (default: same value as -q)] 00:07:15.691 Can be used to spread operations across a wider range of memory. 
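Measured against the option list printed above, both negative tests in this block fail by construction: foobar is not a listed workload, and -x must be non-negative. Hypothetical well-formed counterparts, built only from flags in that list (the crc32c form matches the runs later in this log):

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2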
00:07:15.691 14:57:46 -- common/autotest_common.sh@653 -- # es=1 00:07:15.691 14:57:46 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:15.691 14:57:46 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:15.691 14:57:46 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:15.691 00:07:15.691 real 0m0.029s 00:07:15.691 user 0m0.017s 00:07:15.691 sys 0m0.012s 00:07:15.691 14:57:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:15.691 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.691 ************************************ 00:07:15.691 END TEST accel_negative_buffers 00:07:15.691 ************************************ 00:07:15.691 14:57:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:15.691 14:57:46 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:15.691 14:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.691 14:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:15.692 ************************************ 00:07:15.692 START TEST accel_crc32c 00:07:15.692 ************************************ 00:07:15.692 14:57:46 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:15.692 14:57:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.692 14:57:46 -- accel/accel.sh@17 -- # local accel_module 00:07:15.692 14:57:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:15.692 14:57:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:15.692 14:57:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.692 14:57:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.692 14:57:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.692 14:57:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.692 14:57:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.692 14:57:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.692 14:57:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.692 14:57:46 -- accel/accel.sh@42 -- # jq -r . 00:07:15.692 [2024-11-20 14:57:46.403956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:15.692 [2024-11-20 14:57:46.404040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67997 ] 00:07:15.950 [2024-11-20 14:57:46.536242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.950 [2024-11-20 14:57:46.573391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.327 14:57:47 -- accel/accel.sh@18 -- # out=' 00:07:17.327 SPDK Configuration: 00:07:17.327 Core mask: 0x1 00:07:17.327 00:07:17.327 Accel Perf Configuration: 00:07:17.327 Workload Type: crc32c 00:07:17.327 CRC-32C seed: 32 00:07:17.327 Transfer size: 4096 bytes 00:07:17.327 Vector count 1 00:07:17.327 Module: software 00:07:17.327 Queue depth: 32 00:07:17.327 Allocate depth: 32 00:07:17.327 # threads/core: 1 00:07:17.327 Run time: 1 seconds 00:07:17.327 Verify: Yes 00:07:17.327 00:07:17.327 Running for 1 seconds... 
00:07:17.327 00:07:17.327 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.327 ------------------------------------------------------------------------------------ 00:07:17.327 0,0 422496/s 1650 MiB/s 0 0 00:07:17.327 ==================================================================================== 00:07:17.327 Total 422496/s 1650 MiB/s 0 0' 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:17.327 14:57:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.327 14:57:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.327 14:57:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.327 14:57:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.327 14:57:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.327 14:57:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.327 14:57:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.327 14:57:47 -- accel/accel.sh@42 -- # jq -r . 00:07:17.327 [2024-11-20 14:57:47.727252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.327 [2024-11-20 14:57:47.727411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68011 ] 00:07:17.327 [2024-11-20 14:57:47.871795] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.327 [2024-11-20 14:57:47.906871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=0x1 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=crc32c 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=32 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=software 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=32 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=32 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=1 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val=Yes 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:17.327 14:57:47 -- accel/accel.sh@21 -- # val= 00:07:17.327 14:57:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # IFS=: 00:07:17.327 14:57:47 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- 
accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@21 -- # val= 00:07:18.264 14:57:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # IFS=: 00:07:18.264 14:57:49 -- accel/accel.sh@20 -- # read -r var val 00:07:18.264 14:57:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.264 14:57:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:18.264 14:57:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.264 00:07:18.264 real 0m2.659s 00:07:18.264 user 0m2.295s 00:07:18.264 sys 0m0.157s 00:07:18.264 14:57:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:18.264 ************************************ 00:07:18.264 END TEST accel_crc32c 00:07:18.264 ************************************ 00:07:18.264 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:07:18.523 14:57:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:18.523 14:57:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:18.523 14:57:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.523 14:57:49 -- common/autotest_common.sh@10 -- # set +x 00:07:18.523 ************************************ 00:07:18.523 START TEST accel_crc32c_C2 00:07:18.523 ************************************ 00:07:18.523 14:57:49 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:18.523 14:57:49 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.523 14:57:49 -- accel/accel.sh@17 -- # local accel_module 00:07:18.523 14:57:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:18.523 14:57:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.523 14:57:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:18.523 14:57:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.523 14:57:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.523 14:57:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.523 14:57:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.523 14:57:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.523 14:57:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.523 14:57:49 -- accel/accel.sh@42 -- # jq -r . 00:07:18.523 [2024-11-20 14:57:49.106364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:18.523 [2024-11-20 14:57:49.106452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68046 ] 00:07:18.523 [2024-11-20 14:57:49.238864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.523 [2024-11-20 14:57:49.273867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.900 14:57:50 -- accel/accel.sh@18 -- # out=' 00:07:19.900 SPDK Configuration: 00:07:19.900 Core mask: 0x1 00:07:19.900 00:07:19.900 Accel Perf Configuration: 00:07:19.900 Workload Type: crc32c 00:07:19.900 CRC-32C seed: 0 00:07:19.900 Transfer size: 4096 bytes 00:07:19.900 Vector count 2 00:07:19.900 Module: software 00:07:19.900 Queue depth: 32 00:07:19.900 Allocate depth: 32 00:07:19.900 # threads/core: 1 00:07:19.900 Run time: 1 seconds 00:07:19.900 Verify: Yes 00:07:19.900 00:07:19.900 Running for 1 seconds... 
00:07:19.900 00:07:19.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.900 ------------------------------------------------------------------------------------ 00:07:19.900 0,0 327776/s 2560 MiB/s 0 0 00:07:19.900 ==================================================================================== 00:07:19.900 Total 327776/s 1280 MiB/s 0 0' 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:19.900 14:57:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:19.900 14:57:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.900 14:57:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.900 14:57:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.900 14:57:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.900 14:57:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.900 14:57:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.900 14:57:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.900 14:57:50 -- accel/accel.sh@42 -- # jq -r . 00:07:19.900 [2024-11-20 14:57:50.428058] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.900 [2024-11-20 14:57:50.428307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68060 ] 00:07:19.900 [2024-11-20 14:57:50.567163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.900 [2024-11-20 14:57:50.608246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val=0x1 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val=crc32c 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val=0 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.900 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.900 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.900 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val=software 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val=32 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val=32 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val=1 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val=Yes 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:19.901 14:57:50 -- accel/accel.sh@21 -- # val= 00:07:19.901 14:57:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # IFS=: 00:07:19.901 14:57:50 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- 
accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@21 -- # val= 00:07:21.277 14:57:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # IFS=: 00:07:21.277 14:57:51 -- accel/accel.sh@20 -- # read -r var val 00:07:21.277 14:57:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.277 14:57:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:21.277 14:57:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.277 00:07:21.277 real 0m2.660s 00:07:21.277 user 0m2.300s 00:07:21.277 sys 0m0.154s 00:07:21.277 14:57:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.277 ************************************ 00:07:21.277 END TEST accel_crc32c_C2 00:07:21.277 ************************************ 00:07:21.277 14:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:21.277 14:57:51 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:21.277 14:57:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:21.277 14:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.277 14:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:21.277 ************************************ 00:07:21.277 START TEST accel_copy 00:07:21.277 ************************************ 00:07:21.277 14:57:51 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:21.277 14:57:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.277 14:57:51 -- accel/accel.sh@17 -- # local accel_module 00:07:21.277 14:57:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:21.277 14:57:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.277 14:57:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:21.277 14:57:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.277 14:57:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.277 14:57:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.277 14:57:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.277 14:57:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.277 14:57:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.277 14:57:51 -- accel/accel.sh@42 -- # jq -r . 00:07:21.277 [2024-11-20 14:57:51.818374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:21.277 [2024-11-20 14:57:51.818711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68094 ] 00:07:21.277 [2024-11-20 14:57:51.953362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.277 [2024-11-20 14:57:51.989843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.758 14:57:53 -- accel/accel.sh@18 -- # out=' 00:07:22.758 SPDK Configuration: 00:07:22.758 Core mask: 0x1 00:07:22.758 00:07:22.758 Accel Perf Configuration: 00:07:22.758 Workload Type: copy 00:07:22.758 Transfer size: 4096 bytes 00:07:22.758 Vector count 1 00:07:22.758 Module: software 00:07:22.758 Queue depth: 32 00:07:22.758 Allocate depth: 32 00:07:22.758 # threads/core: 1 00:07:22.758 Run time: 1 seconds 00:07:22.758 Verify: Yes 00:07:22.758 00:07:22.758 Running for 1 seconds... 
00:07:22.758 00:07:22.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.758 ------------------------------------------------------------------------------------ 00:07:22.758 0,0 291104/s 1137 MiB/s 0 0 00:07:22.758 ==================================================================================== 00:07:22.758 Total 291104/s 1137 MiB/s 0 0' 00:07:22.758 14:57:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:22.758 14:57:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.758 14:57:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.758 14:57:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.758 14:57:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.758 14:57:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.758 14:57:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.758 14:57:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.758 14:57:53 -- accel/accel.sh@42 -- # jq -r . 00:07:22.758 [2024-11-20 14:57:53.148783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.758 [2024-11-20 14:57:53.148887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68114 ] 00:07:22.758 [2024-11-20 14:57:53.288492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.758 [2024-11-20 14:57:53.324072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=0x1 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=copy 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- 
accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=software 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=32 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=32 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=1 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val=Yes 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:22.758 14:57:53 -- accel/accel.sh@21 -- # val= 00:07:22.758 14:57:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # IFS=: 00:07:22.758 14:57:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@21 -- # val= 00:07:23.694 14:57:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.694 14:57:54 -- accel/accel.sh@20 -- # IFS=: 00:07:23.694 14:57:54 -- 
accel/accel.sh@20 -- # read -r var val 00:07:23.694 14:57:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.694 14:57:54 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:23.694 14:57:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.694 00:07:23.694 real 0m2.666s 00:07:23.694 user 0m2.304s 00:07:23.694 sys 0m0.157s 00:07:23.694 ************************************ 00:07:23.694 END TEST accel_copy 00:07:23.694 ************************************ 00:07:23.694 14:57:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.694 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.953 14:57:54 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.954 14:57:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:23.954 14:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.954 14:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.954 ************************************ 00:07:23.954 START TEST accel_fill 00:07:23.954 ************************************ 00:07:23.954 14:57:54 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.954 14:57:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.954 14:57:54 -- accel/accel.sh@17 -- # local accel_module 00:07:23.954 14:57:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.954 14:57:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:23.954 14:57:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.954 14:57:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.954 14:57:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.954 14:57:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.954 14:57:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.954 14:57:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.954 14:57:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.954 14:57:54 -- accel/accel.sh@42 -- # jq -r . 00:07:23.954 [2024-11-20 14:57:54.532462] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.954 [2024-11-20 14:57:54.532566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68143 ] 00:07:23.954 [2024-11-20 14:57:54.670197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.954 [2024-11-20 14:57:54.708949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.330 14:57:55 -- accel/accel.sh@18 -- # out=' 00:07:25.330 SPDK Configuration: 00:07:25.330 Core mask: 0x1 00:07:25.330 00:07:25.330 Accel Perf Configuration: 00:07:25.330 Workload Type: fill 00:07:25.330 Fill pattern: 0x80 00:07:25.330 Transfer size: 4096 bytes 00:07:25.330 Vector count 1 00:07:25.330 Module: software 00:07:25.330 Queue depth: 64 00:07:25.331 Allocate depth: 64 00:07:25.331 # threads/core: 1 00:07:25.331 Run time: 1 seconds 00:07:25.331 Verify: Yes 00:07:25.331 00:07:25.331 Running for 1 seconds... 
00:07:25.331 00:07:25.331 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.331 ------------------------------------------------------------------------------------ 00:07:25.331 0,0 417088/s 1629 MiB/s 0 0 00:07:25.331 ==================================================================================== 00:07:25.331 Total 417088/s 1629 MiB/s 0 0' 00:07:25.331 14:57:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.331 14:57:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:25.331 14:57:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.331 14:57:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.331 14:57:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.331 14:57:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.331 14:57:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.331 14:57:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.331 14:57:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.331 14:57:55 -- accel/accel.sh@42 -- # jq -r . 00:07:25.331 [2024-11-20 14:57:55.860069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.331 [2024-11-20 14:57:55.860169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68162 ] 00:07:25.331 [2024-11-20 14:57:56.002162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.331 [2024-11-20 14:57:56.042517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=0x1 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=fill 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=0x80 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 
00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=software 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=64 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=64 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=1 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val=Yes 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:25.331 14:57:56 -- accel/accel.sh@21 -- # val= 00:07:25.331 14:57:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # IFS=: 00:07:25.331 14:57:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@21 -- # val= 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@21 -- # val= 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@21 -- # val= 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@21 -- # val= 00:07:26.708 ************************************ 00:07:26.708 END TEST accel_fill 00:07:26.708 ************************************ 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- 
accel/accel.sh@21 -- # val= 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@21 -- # val= 00:07:26.708 14:57:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # IFS=: 00:07:26.708 14:57:57 -- accel/accel.sh@20 -- # read -r var val 00:07:26.708 14:57:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.708 14:57:57 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:26.708 14:57:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.708 00:07:26.708 real 0m2.680s 00:07:26.708 user 0m2.313s 00:07:26.708 sys 0m0.156s 00:07:26.708 14:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.708 14:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:26.708 14:57:57 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:26.708 14:57:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:26.708 14:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:26.708 14:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:26.708 ************************************ 00:07:26.708 START TEST accel_copy_crc32c 00:07:26.708 ************************************ 00:07:26.708 14:57:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:26.708 14:57:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.708 14:57:57 -- accel/accel.sh@17 -- # local accel_module 00:07:26.708 14:57:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:26.708 14:57:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:26.708 14:57:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.708 14:57:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.708 14:57:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.708 14:57:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.708 14:57:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.708 14:57:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.708 14:57:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.708 14:57:57 -- accel/accel.sh@42 -- # jq -r . 00:07:26.708 [2024-11-20 14:57:57.264190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:26.708 [2024-11-20 14:57:57.264284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68197 ] 00:07:26.708 [2024-11-20 14:57:57.400275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.708 [2024-11-20 14:57:57.441704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.084 14:57:58 -- accel/accel.sh@18 -- # out=' 00:07:28.084 SPDK Configuration: 00:07:28.084 Core mask: 0x1 00:07:28.084 00:07:28.084 Accel Perf Configuration: 00:07:28.084 Workload Type: copy_crc32c 00:07:28.084 CRC-32C seed: 0 00:07:28.084 Vector size: 4096 bytes 00:07:28.084 Transfer size: 4096 bytes 00:07:28.084 Vector count 1 00:07:28.084 Module: software 00:07:28.084 Queue depth: 32 00:07:28.084 Allocate depth: 32 00:07:28.084 # threads/core: 1 00:07:28.084 Run time: 1 seconds 00:07:28.084 Verify: Yes 00:07:28.084 00:07:28.084 Running for 1 seconds... 
00:07:28.084 00:07:28.085 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.085 ------------------------------------------------------------------------------------ 00:07:28.085 0,0 212512/s 830 MiB/s 0 0 00:07:28.085 ==================================================================================== 00:07:28.085 Total 212512/s 830 MiB/s 0 0' 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:28.085 14:57:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:28.085 14:57:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.085 14:57:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.085 14:57:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.085 14:57:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.085 14:57:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.085 14:57:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.085 14:57:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.085 14:57:58 -- accel/accel.sh@42 -- # jq -r . 00:07:28.085 [2024-11-20 14:57:58.611871] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:28.085 [2024-11-20 14:57:58.612289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68213 ] 00:07:28.085 [2024-11-20 14:57:58.751528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.085 [2024-11-20 14:57:58.788797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=0x1 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=0 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 
14:57:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=software 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=32 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=32 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=1 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val=Yes 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.085 14:57:58 -- accel/accel.sh@21 -- # val= 00:07:28.085 14:57:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.085 14:57:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 
00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@21 -- # val= 00:07:29.463 14:57:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.463 14:57:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.463 14:57:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.463 14:57:59 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:29.463 14:57:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.463 00:07:29.463 real 0m2.696s 00:07:29.463 user 0m2.326s 00:07:29.463 sys 0m0.156s 00:07:29.463 ************************************ 00:07:29.463 END TEST accel_copy_crc32c 00:07:29.463 ************************************ 00:07:29.463 14:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.463 14:57:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.463 14:57:59 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.463 14:57:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:29.463 14:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.463 14:57:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.463 ************************************ 00:07:29.463 START TEST accel_copy_crc32c_C2 00:07:29.463 ************************************ 00:07:29.463 14:57:59 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:29.463 14:57:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.463 14:57:59 -- accel/accel.sh@17 -- # local accel_module 00:07:29.463 14:57:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:29.463 14:57:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:29.463 14:57:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.463 14:57:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.463 14:57:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.463 14:57:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.463 14:57:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.463 14:57:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.463 14:57:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.463 14:57:59 -- accel/accel.sh@42 -- # jq -r . 00:07:29.463 [2024-11-20 14:58:00.014300] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:29.463 [2024-11-20 14:58:00.014671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68247 ] 00:07:29.463 [2024-11-20 14:58:00.150127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.463 [2024-11-20 14:58:00.189918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.841 14:58:01 -- accel/accel.sh@18 -- # out=' 00:07:30.841 SPDK Configuration: 00:07:30.841 Core mask: 0x1 00:07:30.841 00:07:30.841 Accel Perf Configuration: 00:07:30.841 Workload Type: copy_crc32c 00:07:30.841 CRC-32C seed: 0 00:07:30.841 Vector size: 4096 bytes 00:07:30.841 Transfer size: 8192 bytes 00:07:30.841 Vector count 2 00:07:30.841 Module: software 00:07:30.841 Queue depth: 32 00:07:30.841 Allocate depth: 32 00:07:30.841 # threads/core: 1 00:07:30.841 Run time: 1 seconds 00:07:30.841 Verify: Yes 00:07:30.841 00:07:30.841 Running for 1 seconds... 00:07:30.841 00:07:30.841 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.841 ------------------------------------------------------------------------------------ 00:07:30.841 0,0 162048/s 1266 MiB/s 0 0 00:07:30.841 ==================================================================================== 00:07:30.841 Total 162048/s 633 MiB/s 0 0' 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:30.841 14:58:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:30.841 14:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.841 14:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.841 14:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.841 14:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.841 14:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.841 14:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.841 14:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.841 14:58:01 -- accel/accel.sh@42 -- # jq -r . 00:07:30.841 [2024-11-20 14:58:01.357500] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:30.841 [2024-11-20 14:58:01.357668] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68267 ] 00:07:30.841 [2024-11-20 14:58:01.499376] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.841 [2024-11-20 14:58:01.534031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val=0x1 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val=0 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val=software 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.841 14:58:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.841 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.841 14:58:01 -- accel/accel.sh@21 -- # val=32 00:07:30.841 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val=32 
00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val=1 00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val=Yes 00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.842 14:58:01 -- accel/accel.sh@21 -- # val= 00:07:30.842 14:58:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.842 14:58:01 -- accel/accel.sh@20 -- # read -r var val 00:07:32.218 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.218 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.218 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.218 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.218 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.218 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.218 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.218 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.219 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.219 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.219 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.219 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.219 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.219 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.219 14:58:02 -- accel/accel.sh@21 -- # val= 00:07:32.219 14:58:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.219 14:58:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.219 14:58:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.219 14:58:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:32.219 14:58:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.219 00:07:32.219 real 0m2.686s 00:07:32.219 user 0m2.296s 00:07:32.219 sys 0m0.180s 00:07:32.219 14:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.219 ************************************ 00:07:32.219 END TEST accel_copy_crc32c_C2 00:07:32.219 ************************************ 00:07:32.219 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.219 14:58:02 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:32.219 14:58:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
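Note: a quick cross-check of the two copy_crc32c runs above. With a single 4096-byte vector, 212512 transfers/s × 4096 bytes ≈ 830 MiB/s, as reported. With -C 2 the transfer size doubles to 8192 bytes (two 4096-byte vectors), and 162048 transfers/s × 8192 bytes ≈ 1266 MiB/s, which matches the per-core row; the Total row's 633 MiB/s appears to be computed against the 4096-byte vector size instead, hence the exact factor-of-two difference between the two rows. Sketch of the two recorded invocations, again assuming the "-c /dev/fd/62" config can be dropped for a standalone run:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2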
00:07:32.219 14:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.219 14:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.219 ************************************ 00:07:32.219 START TEST accel_dualcast 00:07:32.219 ************************************ 00:07:32.219 14:58:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:32.219 14:58:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.219 14:58:02 -- accel/accel.sh@17 -- # local accel_module 00:07:32.219 14:58:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:32.219 14:58:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:32.219 14:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.219 14:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.219 14:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.219 14:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.219 14:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.219 14:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.219 14:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.219 14:58:02 -- accel/accel.sh@42 -- # jq -r . 00:07:32.219 [2024-11-20 14:58:02.749038] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.219 [2024-11-20 14:58:02.749161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68297 ] 00:07:32.219 [2024-11-20 14:58:02.886102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.219 [2024-11-20 14:58:02.921416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.593 14:58:04 -- accel/accel.sh@18 -- # out=' 00:07:33.593 SPDK Configuration: 00:07:33.593 Core mask: 0x1 00:07:33.593 00:07:33.593 Accel Perf Configuration: 00:07:33.593 Workload Type: dualcast 00:07:33.593 Transfer size: 4096 bytes 00:07:33.593 Vector count 1 00:07:33.593 Module: software 00:07:33.593 Queue depth: 32 00:07:33.593 Allocate depth: 32 00:07:33.593 # threads/core: 1 00:07:33.593 Run time: 1 seconds 00:07:33.593 Verify: Yes 00:07:33.593 00:07:33.593 Running for 1 seconds... 00:07:33.593 00:07:33.593 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.593 ------------------------------------------------------------------------------------ 00:07:33.593 0,0 331488/s 1294 MiB/s 0 0 00:07:33.593 ==================================================================================== 00:07:33.593 Total 331488/s 1294 MiB/s 0 0' 00:07:33.593 14:58:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:33.593 14:58:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.593 14:58:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.593 14:58:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.593 14:58:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.593 14:58:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.593 14:58:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.593 14:58:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.593 14:58:04 -- accel/accel.sh@42 -- # jq -r . 
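Note: the dualcast result follows the same arithmetic: 331488 transfers/s × 4096 bytes ≈ 1294 MiB/s, matching the table above. Minimal sketch of the recorded invocation, under the same assumption about omitting the harness-supplied config:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y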
00:07:33.593 [2024-11-20 14:58:04.076824] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.593 [2024-11-20 14:58:04.076961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68311 ] 00:07:33.593 [2024-11-20 14:58:04.214461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.593 [2024-11-20 14:58:04.255004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val=0x1 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val=dualcast 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val=software 00:07:33.593 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.593 14:58:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.593 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.593 14:58:04 -- accel/accel.sh@21 -- # val=32 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val=32 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val=1 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 
14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val=Yes 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.594 14:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.594 14:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.594 14:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.972 14:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.972 14:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.972 14:58:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.972 14:58:05 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:34.972 ************************************ 00:07:34.972 END TEST accel_dualcast 00:07:34.972 ************************************ 00:07:34.972 14:58:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.972 00:07:34.972 real 0m2.673s 00:07:34.972 user 0m2.301s 00:07:34.972 sys 0m0.164s 00:07:34.972 14:58:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.972 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.972 14:58:05 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:34.972 14:58:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:34.972 14:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.972 14:58:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.972 ************************************ 00:07:34.972 START TEST accel_compare 00:07:34.972 ************************************ 00:07:34.972 14:58:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:34.972 
14:58:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.972 14:58:05 -- accel/accel.sh@17 -- # local accel_module 00:07:34.972 14:58:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:34.972 14:58:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:34.972 14:58:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.972 14:58:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.972 14:58:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.972 14:58:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.972 14:58:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.972 14:58:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.972 14:58:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.972 14:58:05 -- accel/accel.sh@42 -- # jq -r . 00:07:34.972 [2024-11-20 14:58:05.472305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:34.972 [2024-11-20 14:58:05.472597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68351 ] 00:07:34.972 [2024-11-20 14:58:05.605711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.972 [2024-11-20 14:58:05.641141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.355 14:58:06 -- accel/accel.sh@18 -- # out=' 00:07:36.355 SPDK Configuration: 00:07:36.355 Core mask: 0x1 00:07:36.355 00:07:36.355 Accel Perf Configuration: 00:07:36.355 Workload Type: compare 00:07:36.355 Transfer size: 4096 bytes 00:07:36.355 Vector count 1 00:07:36.355 Module: software 00:07:36.355 Queue depth: 32 00:07:36.355 Allocate depth: 32 00:07:36.355 # threads/core: 1 00:07:36.355 Run time: 1 seconds 00:07:36.355 Verify: Yes 00:07:36.355 00:07:36.355 Running for 1 seconds... 00:07:36.355 00:07:36.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.355 ------------------------------------------------------------------------------------ 00:07:36.355 0,0 411744/s 1608 MiB/s 0 0 00:07:36.355 ==================================================================================== 00:07:36.355 Total 411744/s 1608 MiB/s 0 0' 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:36.355 14:58:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:36.355 14:58:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.355 14:58:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.355 14:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.355 14:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.355 14:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.355 14:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.355 14:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.355 14:58:06 -- accel/accel.sh@42 -- # jq -r . 00:07:36.355 [2024-11-20 14:58:06.797022] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:36.355 [2024-11-20 14:58:06.797156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68365 ] 00:07:36.355 [2024-11-20 14:58:06.928755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.355 [2024-11-20 14:58:06.964513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.355 14:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:06 -- accel/accel.sh@21 -- # val=0x1 00:07:36.355 14:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=compare 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=software 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=32 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=32 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=1 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val='1 seconds' 
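Note: the compare numbers also check out against the 4096-byte transfer size: 411744 transfers/s × 4096 bytes ≈ 1608 MiB/s. Recorded invocation, sketched without the harness-supplied "-c /dev/fd/62" config:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compare -y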
00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val=Yes 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.355 14:58:07 -- accel/accel.sh@21 -- # val= 00:07:36.355 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.355 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.356 14:58:07 -- accel/accel.sh@21 -- # val= 00:07:36.356 14:58:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.356 14:58:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.356 14:58:07 -- accel/accel.sh@20 -- # read -r var val 00:07:37.729 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.729 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.729 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.730 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.730 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.730 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.730 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.730 14:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.730 14:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.730 14:58:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.730 14:58:08 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:37.730 14:58:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.730 00:07:37.730 real 0m2.656s 00:07:37.730 user 0m2.306s 00:07:37.730 sys 0m0.145s 00:07:37.730 14:58:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.730 14:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.730 ************************************ 00:07:37.730 END TEST accel_compare 00:07:37.730 ************************************ 00:07:37.730 14:58:08 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:37.730 14:58:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:37.730 14:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.730 14:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.730 ************************************ 00:07:37.730 START TEST accel_xor 00:07:37.730 ************************************ 00:07:37.730 14:58:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:37.730 14:58:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.730 14:58:08 -- accel/accel.sh@17 -- # local accel_module 00:07:37.730 
14:58:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:37.730 14:58:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:37.730 14:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.730 14:58:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.730 14:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.730 14:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.730 14:58:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.730 14:58:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.730 14:58:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.730 14:58:08 -- accel/accel.sh@42 -- # jq -r . 00:07:37.730 [2024-11-20 14:58:08.165711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.730 [2024-11-20 14:58:08.165814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68394 ] 00:07:37.730 [2024-11-20 14:58:08.299528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.730 [2024-11-20 14:58:08.340354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.701 14:58:09 -- accel/accel.sh@18 -- # out=' 00:07:38.701 SPDK Configuration: 00:07:38.701 Core mask: 0x1 00:07:38.701 00:07:38.701 Accel Perf Configuration: 00:07:38.701 Workload Type: xor 00:07:38.701 Source buffers: 2 00:07:38.701 Transfer size: 4096 bytes 00:07:38.701 Vector count 1 00:07:38.701 Module: software 00:07:38.701 Queue depth: 32 00:07:38.701 Allocate depth: 32 00:07:38.701 # threads/core: 1 00:07:38.701 Run time: 1 seconds 00:07:38.701 Verify: Yes 00:07:38.701 00:07:38.701 Running for 1 seconds... 00:07:38.701 00:07:38.701 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.701 ------------------------------------------------------------------------------------ 00:07:38.701 0,0 229376/s 896 MiB/s 0 0 00:07:38.701 ==================================================================================== 00:07:38.701 Total 229376/s 896 MiB/s 0 0' 00:07:38.701 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.701 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.701 14:58:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:38.701 14:58:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:38.701 14:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.701 14:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.701 14:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.701 14:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.701 14:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.701 14:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.701 14:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.701 14:58:09 -- accel/accel.sh@42 -- # jq -r . 00:07:38.701 [2024-11-20 14:58:09.501227] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.701 [2024-11-20 14:58:09.501317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68419 ] 00:07:38.960 [2024-11-20 14:58:09.634350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.960 [2024-11-20 14:58:09.675731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=0x1 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=xor 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=2 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=software 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=32 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=32 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=1 00:07:38.960 14:58:09 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val=Yes 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.960 14:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.960 14:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.960 14:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.336 14:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.336 14:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.336 14:58:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.336 14:58:10 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:40.336 14:58:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.336 00:07:40.336 real 0m2.670s 00:07:40.336 user 0m2.298s 00:07:40.336 sys 0m0.162s 00:07:40.336 14:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.336 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.336 ************************************ 00:07:40.336 END TEST accel_xor 00:07:40.336 ************************************ 00:07:40.336 14:58:10 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:40.336 14:58:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:40.336 14:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.336 14:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.336 ************************************ 00:07:40.336 START TEST accel_xor 00:07:40.336 ************************************ 00:07:40.336 
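Note: the two-source xor run works out exactly: 229376 transfers/s × 4096 bytes = 896 MiB/s. The accel_xor test started just above adds -x 3, i.e. three source buffers instead of two (see the Source buffers line in the configuration that follows). Sketch of both recorded invocations, with the usual assumption that the "-c /dev/fd/62" config is optional for a standalone run:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3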
14:58:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:40.336 14:58:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.336 14:58:10 -- accel/accel.sh@17 -- # local accel_module 00:07:40.336 14:58:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:40.336 14:58:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:40.336 14:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.336 14:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.336 14:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.336 14:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.336 14:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.336 14:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.336 14:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.336 14:58:10 -- accel/accel.sh@42 -- # jq -r . 00:07:40.336 [2024-11-20 14:58:10.882292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.336 [2024-11-20 14:58:10.882441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68448 ] 00:07:40.336 [2024-11-20 14:58:11.024269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.336 [2024-11-20 14:58:11.067812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.714 14:58:12 -- accel/accel.sh@18 -- # out=' 00:07:41.714 SPDK Configuration: 00:07:41.714 Core mask: 0x1 00:07:41.714 00:07:41.714 Accel Perf Configuration: 00:07:41.714 Workload Type: xor 00:07:41.714 Source buffers: 3 00:07:41.714 Transfer size: 4096 bytes 00:07:41.714 Vector count 1 00:07:41.714 Module: software 00:07:41.714 Queue depth: 32 00:07:41.714 Allocate depth: 32 00:07:41.714 # threads/core: 1 00:07:41.714 Run time: 1 seconds 00:07:41.714 Verify: Yes 00:07:41.714 00:07:41.714 Running for 1 seconds... 00:07:41.714 00:07:41.714 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.714 ------------------------------------------------------------------------------------ 00:07:41.714 0,0 208352/s 813 MiB/s 0 0 00:07:41.714 ==================================================================================== 00:07:41.714 Total 208352/s 813 MiB/s 0 0' 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:41.714 14:58:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:41.714 14:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.714 14:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.714 14:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.714 14:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.714 14:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.714 14:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.714 14:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.714 14:58:12 -- accel/accel.sh@42 -- # jq -r . 00:07:41.714 [2024-11-20 14:58:12.235910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.714 [2024-11-20 14:58:12.236051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68462 ] 00:07:41.714 [2024-11-20 14:58:12.371690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.714 [2024-11-20 14:58:12.408047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=0x1 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=xor 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=3 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=software 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=32 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=32 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=1 00:07:41.714 14:58:12 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val=Yes 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.714 14:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.714 14:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.714 14:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@21 -- # val= 00:07:43.120 14:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:43.120 14:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:43.120 14:58:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.120 14:58:13 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:43.120 14:58:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.120 00:07:43.120 real 0m2.695s 00:07:43.120 user 0m2.321s 00:07:43.120 sys 0m0.161s 00:07:43.120 14:58:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:43.120 ************************************ 00:07:43.120 END TEST accel_xor 00:07:43.120 ************************************ 00:07:43.120 14:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 14:58:13 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:43.120 14:58:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:43.120 14:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.120 14:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:43.120 ************************************ 00:07:43.120 START TEST accel_dif_verify 00:07:43.120 ************************************ 
00:07:43.120 14:58:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:43.120 14:58:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.120 14:58:13 -- accel/accel.sh@17 -- # local accel_module 00:07:43.120 14:58:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:43.120 14:58:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:43.120 14:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.120 14:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.120 14:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.120 14:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.120 14:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.120 14:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.120 14:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.120 14:58:13 -- accel/accel.sh@42 -- # jq -r . 00:07:43.120 [2024-11-20 14:58:13.618714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:43.120 [2024-11-20 14:58:13.618862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68502 ] 00:07:43.120 [2024-11-20 14:58:13.757687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.120 [2024-11-20 14:58:13.793958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.493 14:58:14 -- accel/accel.sh@18 -- # out=' 00:07:44.493 SPDK Configuration: 00:07:44.493 Core mask: 0x1 00:07:44.493 00:07:44.493 Accel Perf Configuration: 00:07:44.493 Workload Type: dif_verify 00:07:44.493 Vector size: 4096 bytes 00:07:44.493 Transfer size: 4096 bytes 00:07:44.493 Block size: 512 bytes 00:07:44.493 Metadata size: 8 bytes 00:07:44.493 Vector count 1 00:07:44.493 Module: software 00:07:44.493 Queue depth: 32 00:07:44.493 Allocate depth: 32 00:07:44.493 # threads/core: 1 00:07:44.493 Run time: 1 seconds 00:07:44.493 Verify: No 00:07:44.493 00:07:44.493 Running for 1 seconds... 00:07:44.493 00:07:44.493 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.493 ------------------------------------------------------------------------------------ 00:07:44.493 0,0 85728/s 340 MiB/s 0 0 00:07:44.493 ==================================================================================== 00:07:44.493 Total 85728/s 334 MiB/s 0 0' 00:07:44.493 14:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:44.493 14:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:44.493 14:58:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.493 14:58:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.493 14:58:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.493 14:58:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.493 14:58:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.493 14:58:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.493 14:58:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.493 14:58:14 -- accel/accel.sh@42 -- # jq -r . 00:07:44.493 [2024-11-20 14:58:14.960245] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
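Annotation (not part of the captured console output): the dif_verify summary above describes 4096-byte transfers checked against data-integrity-field protection info; with a 512-byte block size and 8 bytes of metadata per block, each transfer covers eight protection fields. The reported total throughput can be sanity-checked directly from the numbers in the table (85728 transfers/s at 4096 bytes each):

  awk 'BEGIN { printf "%.0f MiB/s\n", 85728 * 4096 / (1024 * 1024) }'
  # prints 335 MiB/s, consistent with the "Total 85728/s 334 MiB/s" line above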
00:07:44.493 [2024-11-20 14:58:14.960379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68516 ] 00:07:44.493 [2024-11-20 14:58:15.098432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.493 [2024-11-20 14:58:15.133822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val=0x1 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val=dif_verify 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.493 14:58:15 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.493 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.493 14:58:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.493 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val=software 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 
-- # val=32 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val=32 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val=1 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val=No 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:44.494 14:58:15 -- accel/accel.sh@21 -- # val= 00:07:44.494 14:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # IFS=: 00:07:44.494 14:58:15 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.869 14:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.869 14:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.869 14:58:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.869 14:58:16 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:45.869 14:58:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.869 00:07:45.869 real 0m2.694s 00:07:45.869 user 0m2.293s 00:07:45.869 sys 0m0.180s 00:07:45.869 ************************************ 00:07:45.869 END TEST accel_dif_verify 00:07:45.869 ************************************ 00:07:45.869 14:58:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.869 
14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.869 14:58:16 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:45.869 14:58:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:45.869 14:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.869 14:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.869 ************************************ 00:07:45.869 START TEST accel_dif_generate 00:07:45.869 ************************************ 00:07:45.869 14:58:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:07:45.869 14:58:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.869 14:58:16 -- accel/accel.sh@17 -- # local accel_module 00:07:45.869 14:58:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:45.870 14:58:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:45.870 14:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.870 14:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.870 14:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.870 14:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.870 14:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.870 14:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.870 14:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.870 14:58:16 -- accel/accel.sh@42 -- # jq -r . 00:07:45.870 [2024-11-20 14:58:16.354919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.870 [2024-11-20 14:58:16.355052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68545 ] 00:07:45.870 [2024-11-20 14:58:16.494170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.870 [2024-11-20 14:58:16.530340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.247 14:58:17 -- accel/accel.sh@18 -- # out=' 00:07:47.247 SPDK Configuration: 00:07:47.247 Core mask: 0x1 00:07:47.247 00:07:47.247 Accel Perf Configuration: 00:07:47.247 Workload Type: dif_generate 00:07:47.247 Vector size: 4096 bytes 00:07:47.247 Transfer size: 4096 bytes 00:07:47.247 Block size: 512 bytes 00:07:47.247 Metadata size: 8 bytes 00:07:47.247 Vector count 1 00:07:47.247 Module: software 00:07:47.247 Queue depth: 32 00:07:47.247 Allocate depth: 32 00:07:47.247 # threads/core: 1 00:07:47.247 Run time: 1 seconds 00:07:47.247 Verify: No 00:07:47.247 00:07:47.247 Running for 1 seconds... 
00:07:47.247 00:07:47.247 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.247 ------------------------------------------------------------------------------------ 00:07:47.247 0,0 114336/s 453 MiB/s 0 0 00:07:47.247 ==================================================================================== 00:07:47.247 Total 114336/s 446 MiB/s 0 0' 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:47.247 14:58:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.247 14:58:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.247 14:58:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.247 14:58:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.247 14:58:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.247 14:58:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.247 14:58:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.247 14:58:17 -- accel/accel.sh@42 -- # jq -r . 00:07:47.247 [2024-11-20 14:58:17.685937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:47.247 [2024-11-20 14:58:17.686030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68565 ] 00:07:47.247 [2024-11-20 14:58:17.815817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.247 [2024-11-20 14:58:17.858054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=0x1 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=dif_generate 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 
00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=software 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=32 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=32 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=1 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val=No 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.247 14:58:17 -- accel/accel.sh@21 -- # val= 00:07:47.247 14:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.247 14:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:48.623 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.623 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.623 14:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.624 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.624 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.624 14:58:19 -- 
accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.624 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.624 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@21 -- # val= 00:07:48.624 14:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.624 14:58:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.624 14:58:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.624 14:58:19 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:48.624 14:58:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.624 00:07:48.624 real 0m2.676s 00:07:48.624 user 0m2.299s 00:07:48.624 sys 0m0.167s 00:07:48.624 14:58:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.624 ************************************ 00:07:48.624 END TEST accel_dif_generate 00:07:48.624 ************************************ 00:07:48.624 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:48.624 14:58:19 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:48.624 14:58:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:48.624 14:58:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.624 14:58:19 -- common/autotest_common.sh@10 -- # set +x 00:07:48.624 ************************************ 00:07:48.624 START TEST accel_dif_generate_copy 00:07:48.624 ************************************ 00:07:48.624 14:58:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:48.624 14:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.624 14:58:19 -- accel/accel.sh@17 -- # local accel_module 00:07:48.624 14:58:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:48.624 14:58:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:48.624 14:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.624 14:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.624 14:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.624 14:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.624 14:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.624 14:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.624 14:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.624 14:58:19 -- accel/accel.sh@42 -- # jq -r . 00:07:48.624 [2024-11-20 14:58:19.085030] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.624 [2024-11-20 14:58:19.085171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68599 ] 00:07:48.624 [2024-11-20 14:58:19.227951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.625 [2024-11-20 14:58:19.264828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.001 14:58:20 -- accel/accel.sh@18 -- # out=' 00:07:50.001 SPDK Configuration: 00:07:50.001 Core mask: 0x1 00:07:50.001 00:07:50.001 Accel Perf Configuration: 00:07:50.001 Workload Type: dif_generate_copy 00:07:50.001 Vector size: 4096 bytes 00:07:50.001 Transfer size: 4096 bytes 00:07:50.001 Vector count 1 00:07:50.001 Module: software 00:07:50.001 Queue depth: 32 00:07:50.001 Allocate depth: 32 00:07:50.001 # threads/core: 1 00:07:50.001 Run time: 1 seconds 00:07:50.001 Verify: No 00:07:50.001 00:07:50.001 Running for 1 seconds... 00:07:50.001 00:07:50.001 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.001 ------------------------------------------------------------------------------------ 00:07:50.001 0,0 82560/s 327 MiB/s 0 0 00:07:50.001 ==================================================================================== 00:07:50.001 Total 82560/s 322 MiB/s 0 0' 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:50.001 14:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.001 14:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.001 14:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.001 14:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.001 14:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.001 14:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.001 14:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.001 14:58:20 -- accel/accel.sh@42 -- # jq -r . 00:07:50.001 [2024-11-20 14:58:20.430123] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
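Annotation (not part of the captured console output): the dif_generate run that finished above and the dif_generate_copy run starting here use identical accel_perf invocations apart from the -w value, and neither passes -y (both summaries report Verify: No). The names suggest the copy variant emits the generated protection data together with the payload into a separate output buffer rather than in place, but the log only records workload type and sizes, so treat that reading as an assumption. A hand re-run of the pair, again using only flags visible in the trace:

  for wl in dif_generate dif_generate_copy; do
      # same binary path as in the trace; config argument omitted as before
      /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$wl"
  done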
00:07:50.001 [2024-11-20 14:58:20.430256] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68613 ] 00:07:50.001 [2024-11-20 14:58:20.568737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.001 [2024-11-20 14:58:20.611890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val=0x1 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val=software 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val=32 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 -- # val=32 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.001 14:58:20 -- accel/accel.sh@21 
-- # val=1 00:07:50.001 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.001 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.002 14:58:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.002 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.002 14:58:20 -- accel/accel.sh@21 -- # val=No 00:07:50.002 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.002 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.002 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.002 14:58:20 -- accel/accel.sh@21 -- # val= 00:07:50.002 14:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.002 14:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 14:58:21 -- accel/accel.sh@21 -- # val= 00:07:51.377 14:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.377 14:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.377 ************************************ 00:07:51.377 END TEST accel_dif_generate_copy 00:07:51.377 ************************************ 00:07:51.377 14:58:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.377 14:58:21 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:51.377 14:58:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.377 00:07:51.377 real 0m2.702s 00:07:51.377 user 0m2.301s 00:07:51.377 sys 0m0.182s 00:07:51.377 14:58:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:51.377 14:58:21 -- common/autotest_common.sh@10 -- # set +x 00:07:51.377 14:58:21 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:51.377 14:58:21 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.377 14:58:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:51.377 14:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.377 14:58:21 -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.377 ************************************ 00:07:51.377 START TEST accel_comp 00:07:51.377 ************************************ 00:07:51.377 14:58:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.377 14:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.377 14:58:21 -- accel/accel.sh@17 -- # local accel_module 00:07:51.377 14:58:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.377 14:58:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.377 14:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.377 14:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.377 14:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.377 14:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.377 14:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.377 14:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.377 14:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.377 14:58:21 -- accel/accel.sh@42 -- # jq -r . 00:07:51.377 [2024-11-20 14:58:21.825463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:51.377 [2024-11-20 14:58:21.825560] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68648 ] 00:07:51.377 [2024-11-20 14:58:21.958882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.377 [2024-11-20 14:58:22.003245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.755 14:58:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:52.755 00:07:52.755 SPDK Configuration: 00:07:52.755 Core mask: 0x1 00:07:52.755 00:07:52.755 Accel Perf Configuration: 00:07:52.755 Workload Type: compress 00:07:52.755 Transfer size: 4096 bytes 00:07:52.755 Vector count 1 00:07:52.755 Module: software 00:07:52.755 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.755 Queue depth: 32 00:07:52.755 Allocate depth: 32 00:07:52.755 # threads/core: 1 00:07:52.755 Run time: 1 seconds 00:07:52.755 Verify: No 00:07:52.755 00:07:52.755 Running for 1 seconds... 
00:07:52.755 00:07:52.755 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.755 ------------------------------------------------------------------------------------ 00:07:52.755 0,0 39680/s 165 MiB/s 0 0 00:07:52.755 ==================================================================================== 00:07:52.755 Total 39680/s 155 MiB/s 0 0' 00:07:52.755 14:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.755 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.755 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.755 14:58:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.755 14:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.755 14:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.755 14:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.755 14:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.755 14:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.755 14:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.755 14:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.755 14:58:23 -- accel/accel.sh@42 -- # jq -r . 00:07:52.755 [2024-11-20 14:58:23.194558] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:52.756 [2024-11-20 14:58:23.194703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68667 ] 00:07:52.756 [2024-11-20 14:58:23.329348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.756 [2024-11-20 14:58:23.369427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=0x1 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=compress 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 
00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=software 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=32 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=32 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=1 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val=No 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:52.756 14:58:23 -- accel/accel.sh@21 -- # val= 00:07:52.756 14:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:52.756 14:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:54.132 14:58:24 -- accel/accel.sh@21 -- # val= 00:07:54.132 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.132 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.132 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.132 14:58:24 -- accel/accel.sh@21 -- # val= 00:07:54.132 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.132 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.133 14:58:24 -- accel/accel.sh@21 -- # val= 00:07:54.133 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.133 14:58:24 -- accel/accel.sh@21 -- # val= 
00:07:54.133 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.133 14:58:24 -- accel/accel.sh@21 -- # val= 00:07:54.133 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.133 14:58:24 -- accel/accel.sh@21 -- # val= 00:07:54.133 14:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # IFS=: 00:07:54.133 14:58:24 -- accel/accel.sh@20 -- # read -r var val 00:07:54.133 14:58:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.133 14:58:24 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:54.133 14:58:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.133 00:07:54.133 real 0m2.721s 00:07:54.133 user 0m2.326s 00:07:54.133 sys 0m0.176s 00:07:54.133 14:58:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.133 ************************************ 00:07:54.133 END TEST accel_comp 00:07:54.133 ************************************ 00:07:54.133 14:58:24 -- common/autotest_common.sh@10 -- # set +x 00:07:54.133 14:58:24 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.133 14:58:24 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:54.133 14:58:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.133 14:58:24 -- common/autotest_common.sh@10 -- # set +x 00:07:54.133 ************************************ 00:07:54.133 START TEST accel_decomp 00:07:54.133 ************************************ 00:07:54.133 14:58:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.133 14:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.133 14:58:24 -- accel/accel.sh@17 -- # local accel_module 00:07:54.133 14:58:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.133 14:58:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:54.133 14:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.133 14:58:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.133 14:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.133 14:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.133 14:58:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.133 14:58:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.133 14:58:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.133 14:58:24 -- accel/accel.sh@42 -- # jq -r . 00:07:54.133 [2024-11-20 14:58:24.588498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:54.133 [2024-11-20 14:58:24.589243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68702 ] 00:07:54.133 [2024-11-20 14:58:24.721811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.133 [2024-11-20 14:58:24.756199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.151 14:58:25 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:55.151 00:07:55.151 SPDK Configuration: 00:07:55.151 Core mask: 0x1 00:07:55.151 00:07:55.151 Accel Perf Configuration: 00:07:55.151 Workload Type: decompress 00:07:55.151 Transfer size: 4096 bytes 00:07:55.151 Vector count 1 00:07:55.151 Module: software 00:07:55.151 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.151 Queue depth: 32 00:07:55.151 Allocate depth: 32 00:07:55.151 # threads/core: 1 00:07:55.151 Run time: 1 seconds 00:07:55.151 Verify: Yes 00:07:55.151 00:07:55.151 Running for 1 seconds... 00:07:55.151 00:07:55.151 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.151 ------------------------------------------------------------------------------------ 00:07:55.151 0,0 62304/s 114 MiB/s 0 0 00:07:55.151 ==================================================================================== 00:07:55.151 Total 62304/s 243 MiB/s 0 0' 00:07:55.151 14:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:55.151 14:58:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.151 14:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:55.151 14:58:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:55.151 14:58:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.151 14:58:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.151 14:58:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.151 14:58:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.151 14:58:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.151 14:58:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.151 14:58:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.151 14:58:25 -- accel/accel.sh@42 -- # jq -r . 00:07:55.151 [2024-11-20 14:58:25.916977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
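Annotation (not part of the captured console output): the compress run earlier and the decompress run summarized above both read the same input, passed as "-l /home/vagrant/spdk_repo/spdk/test/accel/bib" and echoed back as "File Name" in their configuration blocks; only the workload and the -y verify flag (decompress only) differ. Repeating the pair outside the harness, with just the flags taken from the trace:

  # BIB is only a local shorthand for the path shown in the log
  BIB=/home/vagrant/spdk_repo/spdk/test/accel/bib
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress   -l "$BIB"
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y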
00:07:55.151 [2024-11-20 14:58:25.917118] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68716 ] 00:07:55.410 [2024-11-20 14:58:26.055132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.410 [2024-11-20 14:58:26.090936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=0x1 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=decompress 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=software 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=32 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- 
accel/accel.sh@21 -- # val=32 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=1 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val=Yes 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:55.410 14:58:26 -- accel/accel.sh@21 -- # val= 00:07:55.410 14:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:55.410 14:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@21 -- # val= 00:07:56.784 14:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:56.784 14:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:56.784 14:58:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.784 14:58:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:56.784 14:58:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.784 00:07:56.784 real 0m2.664s 00:07:56.784 user 0m2.294s 00:07:56.784 sys 0m0.153s 00:07:56.784 14:58:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.784 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:56.784 ************************************ 00:07:56.784 END TEST accel_decomp 00:07:56.784 ************************************ 00:07:56.784 14:58:27 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
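The run_test line above drives the same accel_perf binary that the surrounding xtrace entries record. A minimal sketch of reproducing this "decompress full" variant by hand, using only flags that appear in the trace, is shown below; the JSON config that build_accel_config pipes in through -c /dev/fd/62 is assumed optional and omitted here, and the repo path is simply the one recorded in the log:

    # Sketch only: rerun the "decompress full" workload outside the test harness.
    # SPDK_REPO matches the path recorded in the trace; adjust if the checkout differs.
    # Flags mirror the accel_perf invocation in the log: -t run time in seconds,
    # -w workload type, -l input file, -y verify output, -o transfer-size override
    # (this variant passes -o 0; the trace reports 111250-byte transfers for it).
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/accel_perf" \
        -t 1 -w decompress \
        -l "$SPDK_REPO/test/accel/bib" \
        -y -o 0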
00:07:56.784 14:58:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:56.784 14:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.784 14:58:27 -- common/autotest_common.sh@10 -- # set +x 00:07:56.784 ************************************ 00:07:56.784 START TEST accel_decmop_full 00:07:56.784 ************************************ 00:07:56.784 14:58:27 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.784 14:58:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.784 14:58:27 -- accel/accel.sh@17 -- # local accel_module 00:07:56.784 14:58:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.784 14:58:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:56.784 14:58:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.784 14:58:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.784 14:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.784 14:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.784 14:58:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.784 14:58:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.784 14:58:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.784 14:58:27 -- accel/accel.sh@42 -- # jq -r . 00:07:56.784 [2024-11-20 14:58:27.291314] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.784 [2024-11-20 14:58:27.291724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68749 ] 00:07:56.784 [2024-11-20 14:58:27.420849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.784 [2024-11-20 14:58:27.458348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.160 14:58:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:58.160 00:07:58.160 SPDK Configuration: 00:07:58.160 Core mask: 0x1 00:07:58.160 00:07:58.160 Accel Perf Configuration: 00:07:58.160 Workload Type: decompress 00:07:58.160 Transfer size: 111250 bytes 00:07:58.160 Vector count 1 00:07:58.160 Module: software 00:07:58.160 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.160 Queue depth: 32 00:07:58.160 Allocate depth: 32 00:07:58.160 # threads/core: 1 00:07:58.160 Run time: 1 seconds 00:07:58.160 Verify: Yes 00:07:58.160 00:07:58.160 Running for 1 seconds... 
00:07:58.160 00:07:58.160 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:58.160 ------------------------------------------------------------------------------------ 00:07:58.160 0,0 4032/s 166 MiB/s 0 0 00:07:58.160 ==================================================================================== 00:07:58.160 Total 4032/s 427 MiB/s 0 0' 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:58.160 14:58:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.160 14:58:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.160 14:58:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.160 14:58:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.160 14:58:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.160 14:58:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.160 14:58:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.160 14:58:28 -- accel/accel.sh@42 -- # jq -r . 00:07:58.160 [2024-11-20 14:58:28.639499] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.160 [2024-11-20 14:58:28.639613] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68770 ] 00:07:58.160 [2024-11-20 14:58:28.770613] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.160 [2024-11-20 14:58:28.812410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val=0x1 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val=decompress 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:58.160 14:58:28 -- accel/accel.sh@20 
-- # IFS=: 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.160 14:58:28 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:58.160 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.160 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=software 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=32 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=32 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=1 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val=Yes 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.161 14:58:28 -- accel/accel.sh@21 -- # val= 00:07:58.161 14:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:58.161 14:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:29 -- accel/accel.sh@21 -- # val= 00:07:59.537 14:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:29 -- accel/accel.sh@21 -- # val= 00:07:59.537 14:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:29 -- accel/accel.sh@21 -- # val= 00:07:59.537 14:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:29 -- accel/accel.sh@21 -- # 
val= 00:07:59.537 14:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:29 -- accel/accel.sh@21 -- # val= 00:07:59.537 14:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:30 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:30 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:30 -- accel/accel.sh@21 -- # val= 00:07:59.537 14:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:59.537 14:58:30 -- accel/accel.sh@20 -- # IFS=: 00:07:59.537 14:58:30 -- accel/accel.sh@20 -- # read -r var val 00:07:59.537 14:58:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:59.537 14:58:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:59.537 14:58:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:59.537 00:07:59.537 real 0m2.741s 00:07:59.537 user 0m2.336s 00:07:59.537 sys 0m0.171s 00:07:59.537 14:58:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.537 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.537 ************************************ 00:07:59.537 END TEST accel_decmop_full 00:07:59.537 ************************************ 00:07:59.537 14:58:30 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.537 14:58:30 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:59.537 14:58:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.537 14:58:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.537 ************************************ 00:07:59.537 START TEST accel_decomp_mcore 00:07:59.537 ************************************ 00:07:59.537 14:58:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.537 14:58:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:59.537 14:58:30 -- accel/accel.sh@17 -- # local accel_module 00:07:59.537 14:58:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.537 14:58:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:59.537 14:58:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:59.537 14:58:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:59.537 14:58:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:59.537 14:58:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:59.537 14:58:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:59.537 14:58:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:59.537 14:58:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:59.537 14:58:30 -- accel/accel.sh@42 -- # jq -r . 00:07:59.537 [2024-11-20 14:58:30.240217] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:59.537 [2024-11-20 14:58:30.244680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68799 ] 00:07:59.796 [2024-11-20 14:58:30.391602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.796 [2024-11-20 14:58:30.458863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.796 [2024-11-20 14:58:30.459199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.796 [2024-11-20 14:58:30.462216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.796 [2024-11-20 14:58:30.465272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.227 14:58:31 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:01.227 00:08:01.227 SPDK Configuration: 00:08:01.227 Core mask: 0xf 00:08:01.227 00:08:01.227 Accel Perf Configuration: 00:08:01.227 Workload Type: decompress 00:08:01.227 Transfer size: 4096 bytes 00:08:01.227 Vector count 1 00:08:01.227 Module: software 00:08:01.227 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.227 Queue depth: 32 00:08:01.227 Allocate depth: 32 00:08:01.227 # threads/core: 1 00:08:01.227 Run time: 1 seconds 00:08:01.227 Verify: Yes 00:08:01.227 00:08:01.227 Running for 1 seconds... 00:08:01.227 00:08:01.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:01.227 ------------------------------------------------------------------------------------ 00:08:01.227 3,0 37184/s 68 MiB/s 0 0 00:08:01.227 2,0 36320/s 66 MiB/s 0 0 00:08:01.227 0,0 37056/s 68 MiB/s 0 0 00:08:01.227 1,0 38016/s 70 MiB/s 0 0 00:08:01.227 ==================================================================================== 00:08:01.227 Total 148576/s 580 MiB/s 0 0' 00:08:01.227 14:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.227 14:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.227 14:58:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.227 14:58:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:01.227 14:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.227 14:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.227 14:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.227 14:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.227 14:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.227 14:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.227 14:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.228 14:58:31 -- accel/accel.sh@42 -- # jq -r . 00:08:01.228 [2024-11-20 14:58:31.732092] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:01.228 [2024-11-20 14:58:31.739338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68827 ] 00:08:01.228 [2024-11-20 14:58:31.938792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.228 [2024-11-20 14:58:31.998969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.228 [2024-11-20 14:58:31.999690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.228 [2024-11-20 14:58:32.010936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.228 [2024-11-20 14:58:32.010962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.486 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.486 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.486 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.486 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.486 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.486 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.486 14:58:32 -- accel/accel.sh@21 -- # val=0xf 00:08:01.486 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.486 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.486 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.486 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=decompress 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=software 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@23 -- # accel_module=software 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 
00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=32 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=32 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=1 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val=Yes 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:01.487 14:58:32 -- accel/accel.sh@21 -- # val= 00:08:01.487 14:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # IFS=: 00:08:01.487 14:58:32 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 14:58:33 -- 
accel/accel.sh@20 -- # read -r var val 00:08:02.421 14:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.421 14:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.421 14:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.421 ************************************ 00:08:02.421 END TEST accel_decomp_mcore 00:08:02.422 ************************************ 00:08:02.422 14:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.422 14:58:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:02.422 14:58:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:02.422 14:58:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.422 00:08:02.422 real 0m2.996s 00:08:02.422 user 0m8.605s 00:08:02.422 sys 0m0.244s 00:08:02.422 14:58:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:02.422 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 14:58:33 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.681 14:58:33 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:02.681 14:58:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.681 14:58:33 -- common/autotest_common.sh@10 -- # set +x 00:08:02.681 ************************************ 00:08:02.681 START TEST accel_decomp_full_mcore 00:08:02.681 ************************************ 00:08:02.681 14:58:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.681 14:58:33 -- accel/accel.sh@16 -- # local accel_opc 00:08:02.681 14:58:33 -- accel/accel.sh@17 -- # local accel_module 00:08:02.681 14:58:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.681 14:58:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:02.681 14:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.681 14:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.681 14:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.681 14:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.681 14:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.681 14:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.681 14:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.681 14:58:33 -- accel/accel.sh@42 -- # jq -r . 00:08:02.681 [2024-11-20 14:58:33.468714] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.681 [2024-11-20 14:58:33.468852] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68859 ] 00:08:02.940 [2024-11-20 14:58:33.628208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.940 [2024-11-20 14:58:33.688032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.940 [2024-11-20 14:58:33.695016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.940 [2024-11-20 14:58:33.696303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.940 [2024-11-20 14:58:33.696338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.384 14:58:34 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:04.384 00:08:04.384 SPDK Configuration: 00:08:04.384 Core mask: 0xf 00:08:04.384 00:08:04.384 Accel Perf Configuration: 00:08:04.384 Workload Type: decompress 00:08:04.384 Transfer size: 111250 bytes 00:08:04.384 Vector count 1 00:08:04.384 Module: software 00:08:04.384 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.384 Queue depth: 32 00:08:04.384 Allocate depth: 32 00:08:04.384 # threads/core: 1 00:08:04.384 Run time: 1 seconds 00:08:04.384 Verify: Yes 00:08:04.384 00:08:04.384 Running for 1 seconds... 00:08:04.384 00:08:04.384 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:04.384 ------------------------------------------------------------------------------------ 00:08:04.384 0,0 2336/s 96 MiB/s 0 0 00:08:04.384 3,0 2816/s 116 MiB/s 0 0 00:08:04.384 2,0 2848/s 117 MiB/s 0 0 00:08:04.384 1,0 3008/s 124 MiB/s 0 0 00:08:04.384 ==================================================================================== 00:08:04.384 Total 11008/s 1167 MiB/s 0 0' 00:08:04.384 14:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.384 14:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.384 14:58:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.384 14:58:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:04.384 14:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.384 14:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.384 14:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.384 14:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.384 14:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.384 14:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.384 14:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.384 14:58:34 -- accel/accel.sh@42 -- # jq -r . 00:08:04.384 [2024-11-20 14:58:34.998963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.384 [2024-11-20 14:58:34.999492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68886 ] 00:08:04.643 [2024-11-20 14:58:35.207430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.643 [2024-11-20 14:58:35.263091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.643 [2024-11-20 14:58:35.262883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.643 [2024-11-20 14:58:35.263040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.643 [2024-11-20 14:58:35.262936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=0xf 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=decompress 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=software 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@23 -- # accel_module=software 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 
00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=32 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=32 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=1 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val=Yes 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:04.643 14:58:35 -- accel/accel.sh@21 -- # val= 00:08:04.643 14:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:04.643 14:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- 
accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@21 -- # val= 00:08:06.018 14:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:06.018 14:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.018 14:58:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:06.018 14:58:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:06.018 ************************************ 00:08:06.018 END TEST accel_decomp_full_mcore 00:08:06.018 ************************************ 00:08:06.018 14:58:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.018 00:08:06.018 real 0m3.110s 00:08:06.018 user 0m8.424s 00:08:06.018 sys 0m0.231s 00:08:06.018 14:58:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.018 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.018 14:58:36 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.018 14:58:36 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:06.018 14:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.018 14:58:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.291 ************************************ 00:08:06.291 START TEST accel_decomp_mthread 00:08:06.291 ************************************ 00:08:06.291 14:58:36 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.291 14:58:36 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.291 14:58:36 -- accel/accel.sh@17 -- # local accel_module 00:08:06.291 14:58:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.291 14:58:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:06.291 14:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.291 14:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.291 14:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.291 14:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.291 14:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.291 14:58:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.291 14:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.291 14:58:36 -- accel/accel.sh@42 -- # jq -r . 00:08:06.291 [2024-11-20 14:58:36.875120] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.291 [2024-11-20 14:58:36.875811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68919 ] 00:08:06.291 [2024-11-20 14:58:37.054872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.555 [2024-11-20 14:58:37.115284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.929 14:58:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:07.929 00:08:07.929 SPDK Configuration: 00:08:07.929 Core mask: 0x1 00:08:07.929 00:08:07.929 Accel Perf Configuration: 00:08:07.929 Workload Type: decompress 00:08:07.929 Transfer size: 4096 bytes 00:08:07.929 Vector count 1 00:08:07.929 Module: software 00:08:07.929 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.929 Queue depth: 32 00:08:07.929 Allocate depth: 32 00:08:07.929 # threads/core: 2 00:08:07.929 Run time: 1 seconds 00:08:07.929 Verify: Yes 00:08:07.929 00:08:07.929 Running for 1 seconds... 00:08:07.929 00:08:07.929 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:07.929 ------------------------------------------------------------------------------------ 00:08:07.929 0,1 20096/s 37 MiB/s 0 0 00:08:07.929 0,0 20000/s 36 MiB/s 0 0 00:08:07.929 ==================================================================================== 00:08:07.929 Total 40096/s 156 MiB/s 0 0' 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:07.929 14:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.929 14:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:07.929 14:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.929 14:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.929 14:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:07.929 14:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:07.929 14:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:07.929 14:58:38 -- accel/accel.sh@42 -- # jq -r . 00:08:07.929 [2024-11-20 14:58:38.323627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:07.929 [2024-11-20 14:58:38.323777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68940 ] 00:08:07.929 [2024-11-20 14:58:38.480798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.929 [2024-11-20 14:58:38.534730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val=0x1 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val=decompress 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:07.929 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.929 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.929 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val=software 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val=32 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- 
accel/accel.sh@21 -- # val=32 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val=2 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val=Yes 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:07.930 14:58:38 -- accel/accel.sh@21 -- # val= 00:08:07.930 14:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:07.930 14:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 ************************************ 00:08:09.305 END TEST accel_decomp_mthread 00:08:09.305 ************************************ 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.305 14:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.305 14:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.305 14:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:09.305 14:58:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:09.305 14:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.305 00:08:09.305 real 0m2.873s 00:08:09.305 user 0m2.253s 00:08:09.305 sys 0m0.198s 00:08:09.305 14:58:39 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:08:09.305 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.305 14:58:39 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.305 14:58:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:09.305 14:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.305 14:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.305 ************************************ 00:08:09.305 START TEST accel_deomp_full_mthread 00:08:09.305 ************************************ 00:08:09.305 14:58:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.305 14:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.305 14:58:39 -- accel/accel.sh@17 -- # local accel_module 00:08:09.305 14:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.305 14:58:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:09.305 14:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.305 14:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.305 14:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.305 14:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.305 14:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.305 14:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.305 14:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.305 14:58:39 -- accel/accel.sh@42 -- # jq -r . 00:08:09.305 [2024-11-20 14:58:39.764053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:09.305 [2024-11-20 14:58:39.764482] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68973 ] 00:08:09.305 [2024-11-20 14:58:39.902268] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.305 [2024-11-20 14:58:39.945150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.681 14:58:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:10.681 00:08:10.681 SPDK Configuration: 00:08:10.681 Core mask: 0x1 00:08:10.681 00:08:10.681 Accel Perf Configuration: 00:08:10.681 Workload Type: decompress 00:08:10.681 Transfer size: 111250 bytes 00:08:10.681 Vector count 1 00:08:10.681 Module: software 00:08:10.681 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:10.681 Queue depth: 32 00:08:10.681 Allocate depth: 32 00:08:10.681 # threads/core: 2 00:08:10.681 Run time: 1 seconds 00:08:10.681 Verify: Yes 00:08:10.681 00:08:10.681 Running for 1 seconds... 
00:08:10.681 00:08:10.681 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:10.681 ------------------------------------------------------------------------------------ 00:08:10.681 0,1 1440/s 59 MiB/s 0 0 00:08:10.681 0,0 1408/s 58 MiB/s 0 0 00:08:10.681 ==================================================================================== 00:08:10.681 Total 2848/s 302 MiB/s 0 0' 00:08:10.681 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.681 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.681 14:58:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.681 14:58:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:10.681 14:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.681 14:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.681 14:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.681 14:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.681 14:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.681 14:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.681 14:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.681 14:58:41 -- accel/accel.sh@42 -- # jq -r . 00:08:10.681 [2024-11-20 14:58:41.186037] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:10.681 [2024-11-20 14:58:41.186187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68993 ] 00:08:10.681 [2024-11-20 14:58:41.334532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.681 [2024-11-20 14:58:41.399199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.681 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.681 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=0x1 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=decompress 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=software 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=32 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=32 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=2 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val=Yes 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.682 14:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.682 14:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.682 14:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # 
read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 ************************************ 00:08:12.057 END TEST accel_deomp_full_mthread 00:08:12.057 ************************************ 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.057 14:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.057 14:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.057 14:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:12.057 14:58:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:12.057 14:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.057 00:08:12.057 real 0m2.866s 00:08:12.057 user 0m2.339s 00:08:12.057 sys 0m0.191s 00:08:12.057 14:58:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.057 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.057 14:58:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:12.057 14:58:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.057 14:58:42 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:12.057 14:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.057 14:58:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.057 14:58:42 -- accel/accel.sh@129 -- # build_accel_config 00:08:12.057 14:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.057 14:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.057 14:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.057 14:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.057 14:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.057 14:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.057 14:58:42 -- accel/accel.sh@42 -- # jq -r . 00:08:12.057 ************************************ 00:08:12.057 START TEST accel_dif_functional_tests 00:08:12.057 ************************************ 00:08:12.057 14:58:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:12.057 [2024-11-20 14:58:42.781209] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:12.057 [2024-11-20 14:58:42.781363] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69027 ] 00:08:12.317 [2024-11-20 14:58:42.924033] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:12.317 [2024-11-20 14:58:42.982439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.317 [2024-11-20 14:58:42.982543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.317 [2024-11-20 14:58:42.982557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.317 00:08:12.317 00:08:12.318 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.318 http://cunit.sourceforge.net/ 00:08:12.318 00:08:12.318 00:08:12.318 Suite: accel_dif 00:08:12.318 Test: verify: DIF generated, GUARD check ...passed 00:08:12.318 Test: verify: DIF generated, APPTAG check ...passed 00:08:12.318 Test: verify: DIF generated, REFTAG check ...passed 00:08:12.318 Test: verify: DIF not generated, GUARD check ...[2024-11-20 14:58:43.043046] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.318 passed 00:08:12.318 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 14:58:43.043442] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:12.318 passed 00:08:12.318 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 14:58:43.043508] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.318 [2024-11-20 14:58:43.043554] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:12.318 [2024-11-20 14:58:43.043599] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.318 passed 00:08:12.318 Test: verify: APPTAG correct, APPTAG check ...[2024-11-20 14:58:43.043781] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:12.318 passed 00:08:12.318 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-20 14:58:43.044145] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:12.318 passed 00:08:12.318 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:12.318 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:12.318 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:12.318 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 14:58:43.044938] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:12.318 passed 00:08:12.318 Test: generate copy: DIF generated, GUARD check ...passed 00:08:12.318 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:12.318 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:12.318 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:12.318 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:12.318 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:12.318 Test: generate copy: iovecs-len validate ...passed 00:08:12.318 Test: generate copy: buffer alignment validate ...passed 00:08:12.318 00:08:12.318 Run Summary: Type Total Ran Passed Failed Inactive 00:08:12.318 suites 1 1 n/a 0 0 00:08:12.318 tests 20 20 20 0 0 00:08:12.318 
asserts 204 204 204 0 n/a 00:08:12.318 00:08:12.318 Elapsed time = 0.007 seconds 00:08:12.318 [2024-11-20 14:58:43.046920] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:12.577 ************************************ 00:08:12.577 END TEST accel_dif_functional_tests 00:08:12.577 ************************************ 00:08:12.577 00:08:12.577 real 0m0.493s 00:08:12.577 user 0m0.544s 00:08:12.577 sys 0m0.130s 00:08:12.577 14:58:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.577 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:08:12.577 00:08:12.577 real 0m59.516s 00:08:12.577 user 1m1.960s 00:08:12.577 sys 0m4.892s 00:08:12.577 ************************************ 00:08:12.577 END TEST accel 00:08:12.577 ************************************ 00:08:12.577 14:58:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.577 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 14:58:43 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:12.836 14:58:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.836 14:58:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.836 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:08:12.836 ************************************ 00:08:12.836 START TEST accel_rpc 00:08:12.836 ************************************ 00:08:12.836 14:58:43 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:12.836 * Looking for test storage... 00:08:12.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:12.836 14:58:43 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:12.836 14:58:43 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:12.836 14:58:43 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:13.095 14:58:43 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:13.095 14:58:43 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:13.095 14:58:43 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:13.095 14:58:43 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:13.095 14:58:43 -- scripts/common.sh@335 -- # IFS=.-: 00:08:13.095 14:58:43 -- scripts/common.sh@335 -- # read -ra ver1 00:08:13.095 14:58:43 -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.095 14:58:43 -- scripts/common.sh@336 -- # read -ra ver2 00:08:13.095 14:58:43 -- scripts/common.sh@337 -- # local 'op=<' 00:08:13.095 14:58:43 -- scripts/common.sh@339 -- # ver1_l=2 00:08:13.095 14:58:43 -- scripts/common.sh@340 -- # ver2_l=1 00:08:13.095 14:58:43 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:13.095 14:58:43 -- scripts/common.sh@343 -- # case "$op" in 00:08:13.095 14:58:43 -- scripts/common.sh@344 -- # : 1 00:08:13.095 14:58:43 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:13.095 14:58:43 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.095 14:58:43 -- scripts/common.sh@364 -- # decimal 1 00:08:13.095 14:58:43 -- scripts/common.sh@352 -- # local d=1 00:08:13.095 14:58:43 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.095 14:58:43 -- scripts/common.sh@354 -- # echo 1 00:08:13.095 14:58:43 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:13.095 14:58:43 -- scripts/common.sh@365 -- # decimal 2 00:08:13.095 14:58:43 -- scripts/common.sh@352 -- # local d=2 00:08:13.095 14:58:43 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.095 14:58:43 -- scripts/common.sh@354 -- # echo 2 00:08:13.095 14:58:43 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:13.095 14:58:43 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:13.095 14:58:43 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:13.095 14:58:43 -- scripts/common.sh@367 -- # return 0 00:08:13.095 14:58:43 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.095 14:58:43 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.095 --rc genhtml_branch_coverage=1 00:08:13.095 --rc genhtml_function_coverage=1 00:08:13.095 --rc genhtml_legend=1 00:08:13.095 --rc geninfo_all_blocks=1 00:08:13.095 --rc geninfo_unexecuted_blocks=1 00:08:13.095 00:08:13.095 ' 00:08:13.095 14:58:43 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.095 --rc genhtml_branch_coverage=1 00:08:13.095 --rc genhtml_function_coverage=1 00:08:13.095 --rc genhtml_legend=1 00:08:13.095 --rc geninfo_all_blocks=1 00:08:13.095 --rc geninfo_unexecuted_blocks=1 00:08:13.095 00:08:13.095 ' 00:08:13.095 14:58:43 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.095 --rc genhtml_branch_coverage=1 00:08:13.095 --rc genhtml_function_coverage=1 00:08:13.095 --rc genhtml_legend=1 00:08:13.095 --rc geninfo_all_blocks=1 00:08:13.095 --rc geninfo_unexecuted_blocks=1 00:08:13.095 00:08:13.095 ' 00:08:13.095 14:58:43 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:13.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.095 --rc genhtml_branch_coverage=1 00:08:13.095 --rc genhtml_function_coverage=1 00:08:13.095 --rc genhtml_legend=1 00:08:13.095 --rc geninfo_all_blocks=1 00:08:13.095 --rc geninfo_unexecuted_blocks=1 00:08:13.095 00:08:13.095 ' 00:08:13.095 14:58:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:13.095 14:58:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=69101 00:08:13.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.095 14:58:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 69101 00:08:13.095 14:58:43 -- common/autotest_common.sh@829 -- # '[' -z 69101 ']' 00:08:13.095 14:58:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.095 14:58:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.095 14:58:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:13.095 14:58:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.095 14:58:43 -- common/autotest_common.sh@10 -- # set +x 00:08:13.095 14:58:43 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:13.095 [2024-11-20 14:58:43.776613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:13.095 [2024-11-20 14:58:43.776753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69101 ] 00:08:13.354 [2024-11-20 14:58:43.916832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.354 [2024-11-20 14:58:43.982995] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:13.354 [2024-11-20 14:58:43.983502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.614 14:58:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.614 14:58:44 -- common/autotest_common.sh@862 -- # return 0 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:13.614 14:58:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.614 14:58:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.614 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:08:13.614 ************************************ 00:08:13.614 START TEST accel_assign_opcode 00:08:13.614 ************************************ 00:08:13.614 14:58:44 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:13.614 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.614 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:08:13.614 [2024-11-20 14:58:44.256290] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:13.614 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:13.614 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.614 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:08:13.614 [2024-11-20 14:58:44.264289] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:13.614 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.614 14:58:44 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:13.614 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.614 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:08:13.873 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.873 14:58:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:13.873 14:58:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.873 14:58:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:13.873 14:58:44 -- accel/accel_rpc.sh@42 -- # grep software 00:08:13.873 14:58:44 -- common/autotest_common.sh@10 -- # set +x 
00:08:13.873 14:58:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.873 software 00:08:13.873 ************************************ 00:08:13.873 END TEST accel_assign_opcode 00:08:13.873 ************************************ 00:08:13.873 00:08:13.873 real 0m0.235s 00:08:13.873 user 0m0.062s 00:08:13.873 sys 0m0.010s 00:08:13.873 14:58:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.873 14:58:44 -- common/autotest_common.sh@10 -- # set +x 00:08:14.132 14:58:44 -- accel/accel_rpc.sh@55 -- # killprocess 69101 00:08:14.132 14:58:44 -- common/autotest_common.sh@936 -- # '[' -z 69101 ']' 00:08:14.132 14:58:44 -- common/autotest_common.sh@940 -- # kill -0 69101 00:08:14.132 14:58:44 -- common/autotest_common.sh@941 -- # uname 00:08:14.132 14:58:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:14.132 14:58:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69101 00:08:14.132 killing process with pid 69101 00:08:14.132 14:58:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:14.132 14:58:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:14.132 14:58:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69101' 00:08:14.132 14:58:44 -- common/autotest_common.sh@955 -- # kill 69101 00:08:14.132 14:58:44 -- common/autotest_common.sh@960 -- # wait 69101 00:08:14.390 00:08:14.390 real 0m1.611s 00:08:14.390 user 0m1.559s 00:08:14.390 sys 0m0.436s 00:08:14.390 ************************************ 00:08:14.390 END TEST accel_rpc 00:08:14.390 ************************************ 00:08:14.390 14:58:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.390 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:08:14.390 14:58:45 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:14.390 14:58:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.390 14:58:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.390 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:08:14.390 ************************************ 00:08:14.390 START TEST app_cmdline 00:08:14.390 ************************************ 00:08:14.390 14:58:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:14.390 * Looking for test storage... 
00:08:14.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:14.390 14:58:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:14.390 14:58:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:14.390 14:58:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:14.650 14:58:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:14.650 14:58:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:14.650 14:58:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:14.650 14:58:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:14.650 14:58:45 -- scripts/common.sh@335 -- # IFS=.-: 00:08:14.650 14:58:45 -- scripts/common.sh@335 -- # read -ra ver1 00:08:14.650 14:58:45 -- scripts/common.sh@336 -- # IFS=.-: 00:08:14.650 14:58:45 -- scripts/common.sh@336 -- # read -ra ver2 00:08:14.650 14:58:45 -- scripts/common.sh@337 -- # local 'op=<' 00:08:14.650 14:58:45 -- scripts/common.sh@339 -- # ver1_l=2 00:08:14.650 14:58:45 -- scripts/common.sh@340 -- # ver2_l=1 00:08:14.650 14:58:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:14.650 14:58:45 -- scripts/common.sh@343 -- # case "$op" in 00:08:14.650 14:58:45 -- scripts/common.sh@344 -- # : 1 00:08:14.650 14:58:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:14.650 14:58:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:14.650 14:58:45 -- scripts/common.sh@364 -- # decimal 1 00:08:14.650 14:58:45 -- scripts/common.sh@352 -- # local d=1 00:08:14.650 14:58:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:14.650 14:58:45 -- scripts/common.sh@354 -- # echo 1 00:08:14.650 14:58:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:14.650 14:58:45 -- scripts/common.sh@365 -- # decimal 2 00:08:14.650 14:58:45 -- scripts/common.sh@352 -- # local d=2 00:08:14.650 14:58:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:14.650 14:58:45 -- scripts/common.sh@354 -- # echo 2 00:08:14.650 14:58:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:14.650 14:58:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:14.650 14:58:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:14.650 14:58:45 -- scripts/common.sh@367 -- # return 0 00:08:14.650 14:58:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:14.650 14:58:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:14.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.650 --rc genhtml_branch_coverage=1 00:08:14.650 --rc genhtml_function_coverage=1 00:08:14.650 --rc genhtml_legend=1 00:08:14.650 --rc geninfo_all_blocks=1 00:08:14.650 --rc geninfo_unexecuted_blocks=1 00:08:14.650 00:08:14.650 ' 00:08:14.650 14:58:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:14.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.650 --rc genhtml_branch_coverage=1 00:08:14.650 --rc genhtml_function_coverage=1 00:08:14.650 --rc genhtml_legend=1 00:08:14.650 --rc geninfo_all_blocks=1 00:08:14.650 --rc geninfo_unexecuted_blocks=1 00:08:14.650 00:08:14.650 ' 00:08:14.650 14:58:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:14.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.650 --rc genhtml_branch_coverage=1 00:08:14.650 --rc genhtml_function_coverage=1 00:08:14.650 --rc genhtml_legend=1 00:08:14.650 --rc geninfo_all_blocks=1 00:08:14.650 --rc geninfo_unexecuted_blocks=1 00:08:14.650 00:08:14.650 ' 00:08:14.650 14:58:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:14.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:14.650 --rc genhtml_branch_coverage=1 00:08:14.650 --rc genhtml_function_coverage=1 00:08:14.650 --rc genhtml_legend=1 00:08:14.650 --rc geninfo_all_blocks=1 00:08:14.650 --rc geninfo_unexecuted_blocks=1 00:08:14.650 00:08:14.650 ' 00:08:14.650 14:58:45 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:14.650 14:58:45 -- app/cmdline.sh@17 -- # spdk_tgt_pid=69199 00:08:14.650 14:58:45 -- app/cmdline.sh@18 -- # waitforlisten 69199 00:08:14.650 14:58:45 -- common/autotest_common.sh@829 -- # '[' -z 69199 ']' 00:08:14.650 14:58:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.650 14:58:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:14.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.650 14:58:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.650 14:58:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:14.650 14:58:45 -- common/autotest_common.sh@10 -- # set +x 00:08:14.650 14:58:45 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:14.650 [2024-11-20 14:58:45.439315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:14.650 [2024-11-20 14:58:45.439446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69199 ] 00:08:14.909 [2024-11-20 14:58:45.586182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.909 [2024-11-20 14:58:45.641234] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:14.909 [2024-11-20 14:58:45.641466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.284 14:58:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.284 14:58:46 -- common/autotest_common.sh@862 -- # return 0 00:08:16.284 14:58:46 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:16.284 { 00:08:16.284 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:16.284 "fields": { 00:08:16.284 "major": 24, 00:08:16.284 "minor": 1, 00:08:16.284 "patch": 1, 00:08:16.284 "suffix": "-pre", 00:08:16.284 "commit": "c13c99a5e" 00:08:16.284 } 00:08:16.284 } 00:08:16.541 14:58:47 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:16.541 14:58:47 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:16.541 14:58:47 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:16.541 14:58:47 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:16.541 14:58:47 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:16.541 14:58:47 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:16.541 14:58:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.541 14:58:47 -- app/cmdline.sh@26 -- # sort 00:08:16.542 14:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:16.542 14:58:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.542 14:58:47 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:16.542 14:58:47 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:16.542 14:58:47 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.542 14:58:47 -- common/autotest_common.sh@650 -- # local es=0 00:08:16.542 14:58:47 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.542 14:58:47 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.542 14:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.542 14:58:47 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.542 14:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.542 14:58:47 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.542 14:58:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.542 14:58:47 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:16.542 14:58:47 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:16.542 14:58:47 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:16.800 request: 00:08:16.800 { 00:08:16.800 "method": "env_dpdk_get_mem_stats", 00:08:16.800 "req_id": 1 00:08:16.800 } 00:08:16.800 Got JSON-RPC error response 00:08:16.800 response: 00:08:16.800 { 00:08:16.800 "code": -32601, 00:08:16.800 "message": "Method not found" 00:08:16.800 } 00:08:16.800 14:58:47 -- common/autotest_common.sh@653 -- # es=1 00:08:16.800 14:58:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.800 14:58:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.800 14:58:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.800 14:58:47 -- app/cmdline.sh@1 -- # killprocess 69199 00:08:16.800 14:58:47 -- common/autotest_common.sh@936 -- # '[' -z 69199 ']' 00:08:16.800 14:58:47 -- common/autotest_common.sh@940 -- # kill -0 69199 00:08:16.800 14:58:47 -- common/autotest_common.sh@941 -- # uname 00:08:16.800 14:58:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:16.800 14:58:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69199 00:08:16.800 killing process with pid 69199 00:08:16.800 14:58:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:16.800 14:58:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:16.800 14:58:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69199' 00:08:16.800 14:58:47 -- common/autotest_common.sh@955 -- # kill 69199 00:08:16.800 14:58:47 -- common/autotest_common.sh@960 -- # wait 69199 00:08:17.058 00:08:17.058 real 0m2.625s 00:08:17.058 user 0m3.562s 00:08:17.058 sys 0m0.442s 00:08:17.058 14:58:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.058 ************************************ 00:08:17.058 END TEST app_cmdline 00:08:17.058 ************************************ 00:08:17.058 14:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.058 14:58:47 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.058 14:58:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.058 14:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.058 14:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.058 
************************************ 00:08:17.058 START TEST version 00:08:17.058 ************************************ 00:08:17.058 14:58:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:17.058 * Looking for test storage... 00:08:17.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:17.058 14:58:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.058 14:58:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.058 14:58:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.316 14:58:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.316 14:58:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.316 14:58:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.316 14:58:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.316 14:58:47 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.316 14:58:47 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.316 14:58:47 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.316 14:58:47 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.316 14:58:47 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.316 14:58:47 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.316 14:58:47 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.316 14:58:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.316 14:58:47 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.316 14:58:47 -- scripts/common.sh@344 -- # : 1 00:08:17.316 14:58:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.316 14:58:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.316 14:58:47 -- scripts/common.sh@364 -- # decimal 1 00:08:17.316 14:58:47 -- scripts/common.sh@352 -- # local d=1 00:08:17.316 14:58:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.316 14:58:47 -- scripts/common.sh@354 -- # echo 1 00:08:17.316 14:58:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.316 14:58:47 -- scripts/common.sh@365 -- # decimal 2 00:08:17.317 14:58:47 -- scripts/common.sh@352 -- # local d=2 00:08:17.317 14:58:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.317 14:58:47 -- scripts/common.sh@354 -- # echo 2 00:08:17.317 14:58:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.317 14:58:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.317 14:58:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.317 14:58:47 -- scripts/common.sh@367 -- # return 0 00:08:17.317 14:58:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.317 14:58:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.317 --rc genhtml_branch_coverage=1 00:08:17.317 --rc genhtml_function_coverage=1 00:08:17.317 --rc genhtml_legend=1 00:08:17.317 --rc geninfo_all_blocks=1 00:08:17.317 --rc geninfo_unexecuted_blocks=1 00:08:17.317 00:08:17.317 ' 00:08:17.317 14:58:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.317 --rc genhtml_branch_coverage=1 00:08:17.317 --rc genhtml_function_coverage=1 00:08:17.317 --rc genhtml_legend=1 00:08:17.317 --rc geninfo_all_blocks=1 00:08:17.317 --rc geninfo_unexecuted_blocks=1 00:08:17.317 00:08:17.317 ' 00:08:17.317 14:58:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.317 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:17.317 --rc genhtml_branch_coverage=1 00:08:17.317 --rc genhtml_function_coverage=1 00:08:17.317 --rc genhtml_legend=1 00:08:17.317 --rc geninfo_all_blocks=1 00:08:17.317 --rc geninfo_unexecuted_blocks=1 00:08:17.317 00:08:17.317 ' 00:08:17.317 14:58:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.317 --rc genhtml_branch_coverage=1 00:08:17.317 --rc genhtml_function_coverage=1 00:08:17.317 --rc genhtml_legend=1 00:08:17.317 --rc geninfo_all_blocks=1 00:08:17.317 --rc geninfo_unexecuted_blocks=1 00:08:17.317 00:08:17.317 ' 00:08:17.317 14:58:47 -- app/version.sh@17 -- # get_header_version major 00:08:17.317 14:58:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.317 14:58:47 -- app/version.sh@14 -- # cut -f2 00:08:17.317 14:58:47 -- app/version.sh@14 -- # tr -d '"' 00:08:17.317 14:58:47 -- app/version.sh@17 -- # major=24 00:08:17.317 14:58:47 -- app/version.sh@18 -- # get_header_version minor 00:08:17.317 14:58:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.317 14:58:47 -- app/version.sh@14 -- # cut -f2 00:08:17.317 14:58:47 -- app/version.sh@14 -- # tr -d '"' 00:08:17.317 14:58:47 -- app/version.sh@18 -- # minor=1 00:08:17.317 14:58:47 -- app/version.sh@19 -- # get_header_version patch 00:08:17.317 14:58:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.317 14:58:47 -- app/version.sh@14 -- # tr -d '"' 00:08:17.317 14:58:47 -- app/version.sh@14 -- # cut -f2 00:08:17.317 14:58:47 -- app/version.sh@19 -- # patch=1 00:08:17.317 14:58:47 -- app/version.sh@20 -- # get_header_version suffix 00:08:17.317 14:58:47 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:17.317 14:58:47 -- app/version.sh@14 -- # cut -f2 00:08:17.317 14:58:47 -- app/version.sh@14 -- # tr -d '"' 00:08:17.317 14:58:47 -- app/version.sh@20 -- # suffix=-pre 00:08:17.317 14:58:47 -- app/version.sh@22 -- # version=24.1 00:08:17.317 14:58:47 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:17.317 14:58:47 -- app/version.sh@25 -- # version=24.1.1 00:08:17.317 14:58:47 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:17.317 14:58:47 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:17.317 14:58:47 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:17.317 14:58:48 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:17.317 14:58:48 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:17.317 00:08:17.317 real 0m0.243s 00:08:17.317 user 0m0.172s 00:08:17.317 sys 0m0.099s 00:08:17.317 14:58:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.317 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.317 ************************************ 00:08:17.317 END TEST version 00:08:17.317 ************************************ 00:08:17.317 14:58:48 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:17.317 14:58:48 -- spdk/autotest.sh@191 -- # uname -s 00:08:17.317 14:58:48 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
00:08:17.317 14:58:48 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:17.317 14:58:48 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:08:17.317 14:58:48 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:08:17.317 14:58:48 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:17.317 14:58:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.317 14:58:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.317 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:08:17.317 ************************************ 00:08:17.317 START TEST spdk_dd 00:08:17.317 ************************************ 00:08:17.317 14:58:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:08:17.317 * Looking for test storage... 00:08:17.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:17.317 14:58:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.317 14:58:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.317 14:58:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.575 14:58:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.575 14:58:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.575 14:58:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.575 14:58:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.575 14:58:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.575 14:58:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.575 14:58:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.575 14:58:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.575 14:58:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.575 14:58:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.575 14:58:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.575 14:58:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.575 14:58:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.575 14:58:48 -- scripts/common.sh@344 -- # : 1 00:08:17.575 14:58:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.575 14:58:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.575 14:58:48 -- scripts/common.sh@364 -- # decimal 1 00:08:17.575 14:58:48 -- scripts/common.sh@352 -- # local d=1 00:08:17.575 14:58:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.575 14:58:48 -- scripts/common.sh@354 -- # echo 1 00:08:17.576 14:58:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.576 14:58:48 -- scripts/common.sh@365 -- # decimal 2 00:08:17.576 14:58:48 -- scripts/common.sh@352 -- # local d=2 00:08:17.576 14:58:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.576 14:58:48 -- scripts/common.sh@354 -- # echo 2 00:08:17.576 14:58:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.576 14:58:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.576 14:58:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.576 14:58:48 -- scripts/common.sh@367 -- # return 0 00:08:17.576 14:58:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.576 14:58:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.576 --rc genhtml_branch_coverage=1 00:08:17.576 --rc genhtml_function_coverage=1 00:08:17.576 --rc genhtml_legend=1 00:08:17.576 --rc geninfo_all_blocks=1 00:08:17.576 --rc geninfo_unexecuted_blocks=1 00:08:17.576 00:08:17.576 ' 00:08:17.576 14:58:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.576 --rc genhtml_branch_coverage=1 00:08:17.576 --rc genhtml_function_coverage=1 00:08:17.576 --rc genhtml_legend=1 00:08:17.576 --rc geninfo_all_blocks=1 00:08:17.576 --rc geninfo_unexecuted_blocks=1 00:08:17.576 00:08:17.576 ' 00:08:17.576 14:58:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.576 --rc genhtml_branch_coverage=1 00:08:17.576 --rc genhtml_function_coverage=1 00:08:17.576 --rc genhtml_legend=1 00:08:17.576 --rc geninfo_all_blocks=1 00:08:17.576 --rc geninfo_unexecuted_blocks=1 00:08:17.576 00:08:17.576 ' 00:08:17.576 14:58:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.576 --rc genhtml_branch_coverage=1 00:08:17.576 --rc genhtml_function_coverage=1 00:08:17.576 --rc genhtml_legend=1 00:08:17.576 --rc geninfo_all_blocks=1 00:08:17.576 --rc geninfo_unexecuted_blocks=1 00:08:17.576 00:08:17.576 ' 00:08:17.576 14:58:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:17.576 14:58:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.576 14:58:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.576 14:58:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.576 14:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.576 14:58:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.576 14:58:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.576 14:58:48 -- paths/export.sh@5 -- # export PATH 00:08:17.576 14:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.576 14:58:48 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:17.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:17.834 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:17.834 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:17.834 14:58:48 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:08:17.835 14:58:48 -- dd/dd.sh@11 -- # nvme_in_userspace 00:08:17.835 14:58:48 -- scripts/common.sh@311 -- # local bdf bdfs 00:08:17.835 14:58:48 -- scripts/common.sh@312 -- # local nvmes 00:08:17.835 14:58:48 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:08:17.835 14:58:48 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:08:17.835 14:58:48 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:08:17.835 14:58:48 -- scripts/common.sh@297 -- # local bdf= 00:08:17.835 14:58:48 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:08:17.835 14:58:48 -- scripts/common.sh@232 -- # local class 00:08:17.835 14:58:48 -- scripts/common.sh@233 -- # local subclass 00:08:17.835 14:58:48 -- scripts/common.sh@234 -- # local progif 00:08:17.835 14:58:48 -- scripts/common.sh@235 -- # printf %02x 1 00:08:17.835 14:58:48 -- scripts/common.sh@235 -- # class=01 00:08:17.835 14:58:48 -- scripts/common.sh@236 -- # printf %02x 8 00:08:17.835 14:58:48 -- scripts/common.sh@236 -- # subclass=08 00:08:17.835 14:58:48 -- scripts/common.sh@237 -- # printf %02x 2 00:08:17.835 14:58:48 -- scripts/common.sh@237 -- # progif=02 00:08:17.835 14:58:48 -- scripts/common.sh@239 -- # hash lspci 00:08:17.835 14:58:48 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:08:17.835 14:58:48 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:08:17.835 14:58:48 -- scripts/common.sh@242 -- # grep -i -- -p02 00:08:17.835 14:58:48 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:08:17.835 14:58:48 -- scripts/common.sh@244 -- # tr -d '"' 00:08:17.835 14:58:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:17.835 14:58:48 -- scripts/common.sh@300 -- # 
pci_can_use 0000:00:06.0 00:08:17.835 14:58:48 -- scripts/common.sh@15 -- # local i 00:08:17.835 14:58:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:08:17.835 14:58:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:17.835 14:58:48 -- scripts/common.sh@24 -- # return 0 00:08:17.835 14:58:48 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:08:17.835 14:58:48 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:08:17.835 14:58:48 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:08:17.835 14:58:48 -- scripts/common.sh@15 -- # local i 00:08:17.835 14:58:48 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:08:17.835 14:58:48 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:08:17.835 14:58:48 -- scripts/common.sh@24 -- # return 0 00:08:17.835 14:58:48 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:08:17.835 14:58:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:17.835 14:58:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:08:17.835 14:58:48 -- scripts/common.sh@322 -- # uname -s 00:08:17.835 14:58:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:17.835 14:58:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:17.835 14:58:48 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:08:17.835 14:58:48 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:08:17.835 14:58:48 -- scripts/common.sh@322 -- # uname -s 00:08:17.835 14:58:48 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:08:17.835 14:58:48 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:08:17.835 14:58:48 -- scripts/common.sh@327 -- # (( 2 )) 00:08:17.835 14:58:48 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:08:17.835 14:58:48 -- dd/dd.sh@13 -- # check_liburing 00:08:17.835 14:58:48 -- dd/common.sh@139 -- # local lib so 00:08:17.835 14:58:48 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:08:17.835 14:58:48 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == 
liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:08:17.835 
14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.835 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:08:17.835 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.836 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:08:17.836 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.836 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:08:17.836 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:17.836 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.095 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:08:18.095 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == 
liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@142 -- # read -r lib _ so _ 00:08:18.096 14:58:48 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:08:18.096 14:58:48 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:08:18.096 * spdk_dd linked to liburing 00:08:18.096 14:58:48 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:18.096 14:58:48 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:18.096 14:58:48 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:18.096 14:58:48 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:18.096 14:58:48 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:18.096 14:58:48 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:18.096 14:58:48 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:18.096 14:58:48 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:18.096 14:58:48 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:18.096 14:58:48 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:18.096 14:58:48 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:18.096 14:58:48 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:18.096 14:58:48 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:18.096 14:58:48 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:18.096 14:58:48 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:18.096 14:58:48 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:18.096 14:58:48 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:18.096 14:58:48 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:18.096 14:58:48 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:18.096 14:58:48 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:18.096 14:58:48 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:18.096 14:58:48 -- common/build_config.sh@20 -- # 
CONFIG_LTO=n 00:08:18.096 14:58:48 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:18.096 14:58:48 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:18.096 14:58:48 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:18.096 14:58:48 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:18.096 14:58:48 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:18.096 14:58:48 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:18.096 14:58:48 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:18.096 14:58:48 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:18.096 14:58:48 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:18.096 14:58:48 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:18.096 14:58:48 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:18.096 14:58:48 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:18.096 14:58:48 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:18.096 14:58:48 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:18.096 14:58:48 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:18.096 14:58:48 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:18.096 14:58:48 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:18.096 14:58:48 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:18.096 14:58:48 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:18.096 14:58:48 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:18.096 14:58:48 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:18.096 14:58:48 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:18.096 14:58:48 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:18.096 14:58:48 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:18.096 14:58:48 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:18.096 14:58:48 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:18.096 14:58:48 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:18.096 14:58:48 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:18.096 14:58:48 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:18.096 14:58:48 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:18.096 14:58:48 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:18.096 14:58:48 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:18.096 14:58:48 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:08:18.096 14:58:48 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:18.096 14:58:48 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:18.096 14:58:48 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:18.096 14:58:48 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:18.096 14:58:48 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:18.096 14:58:48 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:18.096 14:58:48 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:18.096 14:58:48 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:18.096 14:58:48 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:18.096 14:58:48 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:18.096 14:58:48 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:18.097 14:58:48 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:18.097 14:58:48 -- common/build_config.sh@66 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:08:18.097 14:58:48 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:18.097 14:58:48 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:18.097 14:58:48 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:18.097 14:58:48 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:18.097 14:58:48 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:18.097 14:58:48 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:18.097 14:58:48 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:18.097 14:58:48 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:18.097 14:58:48 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:18.097 14:58:48 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:18.097 14:58:48 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:18.097 14:58:48 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:18.097 14:58:48 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:08:18.097 14:58:48 -- dd/common.sh@149 -- # [[ y != y ]] 00:08:18.097 14:58:48 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:08:18.097 14:58:48 -- dd/common.sh@156 -- # export liburing_in_use=1 00:08:18.097 14:58:48 -- dd/common.sh@156 -- # liburing_in_use=1 00:08:18.097 14:58:48 -- dd/common.sh@157 -- # return 0 00:08:18.097 14:58:48 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:08:18.097 14:58:48 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:18.097 14:58:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.097 14:58:48 -- common/autotest_common.sh@10 -- # set +x 00:08:18.097 ************************************ 00:08:18.097 START TEST spdk_dd_basic_rw 00:08:18.097 ************************************ 00:08:18.097 14:58:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:08:18.097 * Looking for test storage... 00:08:18.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.097 14:58:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:18.097 14:58:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:18.097 14:58:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:18.097 14:58:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:18.097 14:58:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:18.097 14:58:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:18.097 14:58:48 -- scripts/common.sh@335 -- # IFS=.-: 00:08:18.097 14:58:48 -- scripts/common.sh@335 -- # read -ra ver1 00:08:18.097 14:58:48 -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.097 14:58:48 -- scripts/common.sh@336 -- # read -ra ver2 00:08:18.097 14:58:48 -- scripts/common.sh@337 -- # local 'op=<' 00:08:18.097 14:58:48 -- scripts/common.sh@339 -- # ver1_l=2 00:08:18.097 14:58:48 -- scripts/common.sh@340 -- # ver2_l=1 00:08:18.097 14:58:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:18.097 14:58:48 -- scripts/common.sh@343 -- # case "$op" in 00:08:18.097 14:58:48 -- scripts/common.sh@344 -- # : 1 00:08:18.097 14:58:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:18.097 14:58:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.097 14:58:48 -- scripts/common.sh@364 -- # decimal 1 00:08:18.097 14:58:48 -- scripts/common.sh@352 -- # local d=1 00:08:18.097 14:58:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.097 14:58:48 -- scripts/common.sh@354 -- # echo 1 00:08:18.097 14:58:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:18.097 14:58:48 -- scripts/common.sh@365 -- # decimal 2 00:08:18.097 14:58:48 -- scripts/common.sh@352 -- # local d=2 00:08:18.097 14:58:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.097 14:58:48 -- scripts/common.sh@354 -- # echo 2 00:08:18.097 14:58:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:18.097 14:58:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:18.097 14:58:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:18.097 14:58:48 -- scripts/common.sh@367 -- # return 0 00:08:18.097 14:58:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 14:58:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:18.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.097 --rc genhtml_branch_coverage=1 00:08:18.097 --rc genhtml_function_coverage=1 00:08:18.097 --rc genhtml_legend=1 00:08:18.097 --rc geninfo_all_blocks=1 00:08:18.097 --rc geninfo_unexecuted_blocks=1 00:08:18.097 00:08:18.097 ' 00:08:18.097 14:58:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.097 14:58:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.097 14:58:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.097 14:58:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.097 14:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.097 14:58:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.097 14:58:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.097 14:58:48 -- paths/export.sh@5 -- # export PATH 00:08:18.098 14:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.098 14:58:48 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:08:18.098 14:58:48 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:08:18.098 14:58:48 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:08:18.098 14:58:48 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:08:18.098 14:58:48 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:08:18.098 14:58:48 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:18.098 14:58:48 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:18.098 14:58:48 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:18.098 14:58:48 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:18.098 14:58:48 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:08:18.098 14:58:48 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:08:18.098 14:58:48 -- dd/common.sh@126 -- # mapfile -t id 00:08:18.098 14:58:48 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:08:18.359 14:58:49 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe 
Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 
Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 93 Data Units Written: 9 Host Read Commands: 2150 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA 
Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:08:18.359 14:58:49 -- dd/common.sh@130 -- # lbaf=04 00:08:18.360 14:58:49 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple 
Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 93 
Data Units Written: 9 Host Read Commands: 2150 Host Write Commands: 95 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:08:18.360 14:58:49 -- dd/common.sh@132 -- # lbaf=4096 00:08:18.360 14:58:49 -- dd/common.sh@134 -- # echo 4096 00:08:18.360 14:58:49 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:08:18.360 14:58:49 -- dd/basic_rw.sh@96 -- # : 00:08:18.360 14:58:49 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.360 14:58:49 -- dd/basic_rw.sh@96 -- # gen_conf 00:08:18.360 14:58:49 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:18.360 14:58:49 -- dd/common.sh@31 -- # xtrace_disable 00:08:18.360 14:58:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.360 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.360 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.360 ************************************ 00:08:18.360 START TEST dd_bs_lt_native_bs 00:08:18.360 ************************************ 00:08:18.360 14:58:49 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.360 14:58:49 -- common/autotest_common.sh@650 -- # local es=0 00:08:18.360 14:58:49 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.360 14:58:49 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.360 14:58:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.360 14:58:49 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.360 14:58:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.360 14:58:49 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.360 14:58:49 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:18.360 14:58:49 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:18.360 14:58:49 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:18.360 14:58:49 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:08:18.360 { 00:08:18.360 "subsystems": [ 00:08:18.360 { 00:08:18.360 "subsystem": "bdev", 00:08:18.360 "config": [ 00:08:18.360 { 00:08:18.360 "params": { 00:08:18.360 "trtype": "pcie", 00:08:18.360 "traddr": "0000:00:06.0", 00:08:18.360 "name": "Nvme0" 00:08:18.360 }, 00:08:18.360 "method": "bdev_nvme_attach_controller" 00:08:18.360 }, 00:08:18.360 { 00:08:18.360 "method": "bdev_wait_for_examine" 00:08:18.360 } 00:08:18.360 ] 00:08:18.360 } 00:08:18.360 ] 00:08:18.360 } 00:08:18.360 [2024-11-20 14:58:49.097573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:18.360 [2024-11-20 14:58:49.097687] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69545 ] 00:08:18.618 [2024-11-20 14:58:49.238772] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.618 [2024-11-20 14:58:49.275581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.618 [2024-11-20 14:58:49.384504] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:08:18.618 [2024-11-20 14:58:49.384583] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:18.877 [2024-11-20 14:58:49.455556] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:18.877 14:58:49 -- common/autotest_common.sh@653 -- # es=234 00:08:18.877 14:58:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:18.877 14:58:49 -- common/autotest_common.sh@662 -- # es=106 00:08:18.877 14:58:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:18.877 14:58:49 -- common/autotest_common.sh@670 -- # es=1 00:08:18.877 14:58:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:18.877 00:08:18.877 real 0m0.498s 00:08:18.877 user 0m0.337s 00:08:18.877 sys 0m0.117s 00:08:18.877 14:58:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:18.877 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.877 ************************************ 00:08:18.877 END TEST dd_bs_lt_native_bs 00:08:18.877 ************************************ 00:08:18.877 14:58:49 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:08:18.877 14:58:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:18.877 14:58:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.877 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:08:18.877 ************************************ 00:08:18.877 START TEST dd_rw 00:08:18.877 ************************************ 00:08:18.877 14:58:49 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:08:18.877 14:58:49 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:08:18.877 14:58:49 -- dd/basic_rw.sh@12 -- # local count size 00:08:18.877 14:58:49 -- dd/basic_rw.sh@13 -- # local qds bss 00:08:18.877 14:58:49 -- dd/basic_rw.sh@15 -- # qds=(1 64) 
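
The long scan at the top of this section is dd/common.sh deciding whether the spdk_dd binary was built against liburing: it walks the shared objects the binary links to, and when liburing.so.2 matches liburing.so.* it prints "* spdk_dd linked to liburing", sources test/common/build_config.sh (the CONFIG_* listing above), confirms /usr/lib64/liburing.so.2 is present and exports liburing_in_use=1, which is why dd.sh's (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard does not fire. A minimal sketch of that detection; feeding the loop from ldd is an assumption, since this part of the log does not show where the "lib _ so _" tuples come from:

# Sketch of the liburing link check; reading the tuples from ldd is an assumption,
# the producer of the "lib _ so _" list is not shown in this part of the log.
spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
liburing_in_use=0
while read -r lib _ so _; do          # $so would hold the resolved path, unused here
  if [[ $lib == liburing.so.* ]]; then
    printf '* spdk_dd linked to liburing\n'
    [[ -e /usr/lib64/liburing.so.2 ]] && liburing_in_use=1
    break
  fi
done < <(ldd "$spdk_dd")
export liburing_in_use
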
00:08:18.877 14:58:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:18.877 14:58:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:18.877 14:58:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:18.877 14:58:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:18.877 14:58:49 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:08:18.877 14:58:49 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:08:18.877 14:58:49 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:18.877 14:58:49 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:18.877 14:58:49 -- dd/basic_rw.sh@23 -- # count=15 00:08:18.877 14:58:49 -- dd/basic_rw.sh@24 -- # count=15 00:08:18.877 14:58:49 -- dd/basic_rw.sh@25 -- # size=61440 00:08:18.877 14:58:49 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:18.877 14:58:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.877 14:58:49 -- common/autotest_common.sh@10 -- # set +x 00:08:19.812 14:58:50 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:08:19.812 14:58:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:19.812 14:58:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:19.812 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.812 [2024-11-20 14:58:50.320365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:19.812 [2024-11-20 14:58:50.321111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:08:19.812 { 00:08:19.812 "subsystems": [ 00:08:19.812 { 00:08:19.812 "subsystem": "bdev", 00:08:19.812 "config": [ 00:08:19.812 { 00:08:19.812 "params": { 00:08:19.812 "trtype": "pcie", 00:08:19.812 "traddr": "0000:00:06.0", 00:08:19.812 "name": "Nvme0" 00:08:19.812 }, 00:08:19.812 "method": "bdev_nvme_attach_controller" 00:08:19.812 }, 00:08:19.812 { 00:08:19.812 "method": "bdev_wait_for_examine" 00:08:19.812 } 00:08:19.812 ] 00:08:19.812 } 00:08:19.812 ] 00:08:19.812 } 00:08:19.812 [2024-11-20 14:58:50.459432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.812 [2024-11-20 14:58:50.501025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.071  [2024-11-20T14:58:50.875Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:20.071 00:08:20.071 14:58:50 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:08:20.071 14:58:50 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:20.071 14:58:50 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.072 14:58:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.072 [2024-11-20 14:58:50.828854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
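
Just before the dd_rw loop above started, the harness derived the drive's native block size: get_native_nvme_bs runs spdk_nvme_identify against 0000:00:06.0, the regex "Current LBA Format: *LBA Format #([0-9]+)" picks format #04 out of the controller dump, and "LBA Format #04: Data Size: *([0-9]+)" then yields 4096. dd_bs_lt_native_bs used that value as its pass criterion: the spdk_dd run with --bs=2048 was expected to abort, and the es= bookkeeping in the trace treats that error exit as the expected outcome. A condensed sketch using the same regexes; the deliberately failing command is left as a comment:

# Derive the native LBA data size from the identify dump, as the trace does.
pci=0000:00:06.0
identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
id=$("$identify" -r "trtype:pcie traddr:$pci")
re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}
re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && native_bs=${BASH_REMATCH[1]}
echo "native_bs=$native_bs"            # 4096 in this run
# dd_bs_lt_native_bs then expects the following to fail, since 2048 < native_bs
# ("!" stands in for the suite's NOT wrapper; the --json bdev config is shown below):
#   ! spdk_dd --if=<input> --ob=Nvme0n1 --bs=2048 --json <bdev.json>
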
00:08:20.072 [2024-11-20 14:58:50.828993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69594 ] 00:08:20.072 { 00:08:20.072 "subsystems": [ 00:08:20.072 { 00:08:20.072 "subsystem": "bdev", 00:08:20.072 "config": [ 00:08:20.072 { 00:08:20.072 "params": { 00:08:20.072 "trtype": "pcie", 00:08:20.072 "traddr": "0000:00:06.0", 00:08:20.072 "name": "Nvme0" 00:08:20.072 }, 00:08:20.072 "method": "bdev_nvme_attach_controller" 00:08:20.072 }, 00:08:20.072 { 00:08:20.072 "method": "bdev_wait_for_examine" 00:08:20.072 } 00:08:20.072 ] 00:08:20.072 } 00:08:20.072 ] 00:08:20.072 } 00:08:20.330 [2024-11-20 14:58:50.967828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.330 [2024-11-20 14:58:51.005744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.330  [2024-11-20T14:58:51.393Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:20.589 00:08:20.589 14:58:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:20.589 14:58:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:20.589 14:58:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:20.589 14:58:51 -- dd/common.sh@11 -- # local nvme_ref= 00:08:20.589 14:58:51 -- dd/common.sh@12 -- # local size=61440 00:08:20.589 14:58:51 -- dd/common.sh@14 -- # local bs=1048576 00:08:20.589 14:58:51 -- dd/common.sh@15 -- # local count=1 00:08:20.589 14:58:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:20.589 14:58:51 -- dd/common.sh@18 -- # gen_conf 00:08:20.589 14:58:51 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.589 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:08:20.589 { 00:08:20.589 "subsystems": [ 00:08:20.589 { 00:08:20.589 "subsystem": "bdev", 00:08:20.589 "config": [ 00:08:20.589 { 00:08:20.589 "params": { 00:08:20.589 "trtype": "pcie", 00:08:20.589 "traddr": "0000:00:06.0", 00:08:20.589 "name": "Nvme0" 00:08:20.589 }, 00:08:20.589 "method": "bdev_nvme_attach_controller" 00:08:20.589 }, 00:08:20.589 { 00:08:20.589 "method": "bdev_wait_for_examine" 00:08:20.589 } 00:08:20.589 ] 00:08:20.589 } 00:08:20.589 ] 00:08:20.589 } 00:08:20.589 [2024-11-20 14:58:51.357075] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
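
Every spdk_dd invocation in this section receives its bdev configuration through --json /dev/fd/62, and the JSON echoed after each launch is that configuration: attach the NVMe controller at PCIe address 0000:00:06.0 as "Nvme0" (so the bdev Nvme0n1 exists) and wait for bdev examination before I/O starts. Written out as a standalone snippet; the here-doc and the $conf variable name are illustrative, the harness builds this with its gen_conf helper:

conf=$(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)
# Handed to spdk_dd on an inherited descriptor, equivalent to the traced --json /dev/fd/62:
#   spdk_dd --if=... --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
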
00:08:20.589 [2024-11-20 14:58:51.357213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69612 ] 00:08:20.847 [2024-11-20 14:58:51.500004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.847 [2024-11-20 14:58:51.540817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.105  [2024-11-20T14:58:51.909Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:21.105 00:08:21.105 14:58:51 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:21.105 14:58:51 -- dd/basic_rw.sh@23 -- # count=15 00:08:21.105 14:58:51 -- dd/basic_rw.sh@24 -- # count=15 00:08:21.105 14:58:51 -- dd/basic_rw.sh@25 -- # size=61440 00:08:21.105 14:58:51 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:08:21.105 14:58:51 -- dd/common.sh@98 -- # xtrace_disable 00:08:21.105 14:58:51 -- common/autotest_common.sh@10 -- # set +x 00:08:22.041 14:58:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:08:22.041 14:58:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:22.041 14:58:52 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.041 14:58:52 -- common/autotest_common.sh@10 -- # set +x 00:08:22.041 [2024-11-20 14:58:52.558501] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:22.041 [2024-11-20 14:58:52.558665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69631 ] 00:08:22.041 { 00:08:22.041 "subsystems": [ 00:08:22.041 { 00:08:22.041 "subsystem": "bdev", 00:08:22.041 "config": [ 00:08:22.041 { 00:08:22.041 "params": { 00:08:22.041 "trtype": "pcie", 00:08:22.041 "traddr": "0000:00:06.0", 00:08:22.041 "name": "Nvme0" 00:08:22.041 }, 00:08:22.041 "method": "bdev_nvme_attach_controller" 00:08:22.041 }, 00:08:22.041 { 00:08:22.041 "method": "bdev_wait_for_examine" 00:08:22.041 } 00:08:22.041 ] 00:08:22.041 } 00:08:22.041 ] 00:08:22.041 } 00:08:22.041 [2024-11-20 14:58:52.697295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.041 [2024-11-20 14:58:52.739733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.300  [2024-11-20T14:58:53.104Z] Copying: 60/60 [kB] (average 58 MBps) 00:08:22.300 00:08:22.300 14:58:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:22.300 14:58:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:08:22.300 14:58:53 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.300 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.300 [2024-11-20 14:58:53.069050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
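
The runs above form one dd_rw iteration at 4 KiB blocks: dd.dump0 is filled with 61,440 bytes (gen_bytes 61440), written to the Nvme0n1 bdev, read back into dd.dump1 with --count=15, and the two files are compared; only a byte-identical round trip lets the test continue. A minimal sketch of that cycle, reusing $conf from the previous snippet (the /dev/urandom line is a stand-in for the suite's gen_bytes helper):

spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
dump0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
dump1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

head -c 61440 /dev/urandom > "$dump0"                       # 15 blocks of 4096 bytes

# Write the pattern to the bdev, then read the same region back.
"$spdk_dd" --if="$dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
"$spdk_dd" --ib=Nvme0n1 --of="$dump1" --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")

# The iteration only passes if the read-back matches the original.
diff -q "$dump0" "$dump1"
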
00:08:22.300 [2024-11-20 14:58:53.069174] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69638 ] 00:08:22.300 { 00:08:22.300 "subsystems": [ 00:08:22.300 { 00:08:22.300 "subsystem": "bdev", 00:08:22.300 "config": [ 00:08:22.300 { 00:08:22.300 "params": { 00:08:22.300 "trtype": "pcie", 00:08:22.300 "traddr": "0000:00:06.0", 00:08:22.300 "name": "Nvme0" 00:08:22.300 }, 00:08:22.300 "method": "bdev_nvme_attach_controller" 00:08:22.300 }, 00:08:22.300 { 00:08:22.300 "method": "bdev_wait_for_examine" 00:08:22.300 } 00:08:22.300 ] 00:08:22.300 } 00:08:22.300 ] 00:08:22.300 } 00:08:22.558 [2024-11-20 14:58:53.200733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.558 [2024-11-20 14:58:53.243365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.816  [2024-11-20T14:58:53.620Z] Copying: 60/60 [kB] (average 29 MBps) 00:08:22.816 00:08:22.816 14:58:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:22.816 14:58:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:08:22.816 14:58:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:22.816 14:58:53 -- dd/common.sh@11 -- # local nvme_ref= 00:08:22.816 14:58:53 -- dd/common.sh@12 -- # local size=61440 00:08:22.816 14:58:53 -- dd/common.sh@14 -- # local bs=1048576 00:08:22.816 14:58:53 -- dd/common.sh@15 -- # local count=1 00:08:22.816 14:58:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:22.816 14:58:53 -- dd/common.sh@18 -- # gen_conf 00:08:22.816 14:58:53 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.816 14:58:53 -- common/autotest_common.sh@10 -- # set +x 00:08:23.075 [2024-11-20 14:58:53.620422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
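
Between iterations the harness calls clear_nvme, which is the /dev/zero run traced above: a single 1 MiB block of zeros is written over the start of the bdev, presumably so stale data from the previous pattern cannot satisfy the next read-back. In isolation (spdk_dd path and $conf as in the earlier sketches):

# clear_nvme, as traced: one 1,048,576-byte block of zeros over the region under test.
"$spdk_dd" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")
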
00:08:23.075 [2024-11-20 14:58:53.620560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69657 ] 00:08:23.075 { 00:08:23.075 "subsystems": [ 00:08:23.075 { 00:08:23.075 "subsystem": "bdev", 00:08:23.075 "config": [ 00:08:23.075 { 00:08:23.075 "params": { 00:08:23.075 "trtype": "pcie", 00:08:23.075 "traddr": "0000:00:06.0", 00:08:23.075 "name": "Nvme0" 00:08:23.075 }, 00:08:23.075 "method": "bdev_nvme_attach_controller" 00:08:23.075 }, 00:08:23.075 { 00:08:23.075 "method": "bdev_wait_for_examine" 00:08:23.075 } 00:08:23.075 ] 00:08:23.075 } 00:08:23.075 ] 00:08:23.075 } 00:08:23.075 [2024-11-20 14:58:53.758261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.075 [2024-11-20 14:58:53.800212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.333  [2024-11-20T14:58:54.137Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:23.333 00:08:23.333 14:58:54 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:23.333 14:58:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:23.333 14:58:54 -- dd/basic_rw.sh@23 -- # count=7 00:08:23.333 14:58:54 -- dd/basic_rw.sh@24 -- # count=7 00:08:23.333 14:58:54 -- dd/basic_rw.sh@25 -- # size=57344 00:08:23.333 14:58:54 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:23.333 14:58:54 -- dd/common.sh@98 -- # xtrace_disable 00:08:23.333 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:08:24.269 14:58:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:08:24.269 14:58:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:24.269 14:58:54 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.269 14:58:54 -- common/autotest_common.sh@10 -- # set +x 00:08:24.269 [2024-11-20 14:58:54.928936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
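
At this point the sweep moves from the first block size to the second: basic_rw.sh builds its block-size list by left-shifting the native size (the bss+=($((native_bs << bs))) lines earlier in the trace), so with native_bs=4096 the list is 4096, 8192 and 16384, and the per-iteration transfer goes from count=15/size=61440 at 4 KiB to count=7/size=57344 at 8 KiB. The arithmetic, spelled out:

# Block-size list and transfer sizes as traced (values taken from the log above).
native_bs=4096
bss=()
for bs in {0..2}; do
  bss+=("$((native_bs << bs))")   # 4096 8192 16384
done
echo "${bss[*]}"
echo "$((15 * 4096))"             # 61440, the 4 KiB transfers above
echo "$((7 * 8192))"              # 57344, the 8 KiB transfers starting here
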
00:08:24.269 [2024-11-20 14:58:54.929080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69675 ] 00:08:24.269 { 00:08:24.269 "subsystems": [ 00:08:24.269 { 00:08:24.269 "subsystem": "bdev", 00:08:24.269 "config": [ 00:08:24.269 { 00:08:24.269 "params": { 00:08:24.269 "trtype": "pcie", 00:08:24.269 "traddr": "0000:00:06.0", 00:08:24.269 "name": "Nvme0" 00:08:24.269 }, 00:08:24.269 "method": "bdev_nvme_attach_controller" 00:08:24.269 }, 00:08:24.269 { 00:08:24.269 "method": "bdev_wait_for_examine" 00:08:24.269 } 00:08:24.269 ] 00:08:24.269 } 00:08:24.269 ] 00:08:24.269 } 00:08:24.269 [2024-11-20 14:58:55.068987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.529 [2024-11-20 14:58:55.111492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.529  [2024-11-20T14:58:55.593Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:24.789 00:08:24.789 14:58:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:24.789 14:58:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:08:24.789 14:58:55 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.789 14:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:24.789 { 00:08:24.789 "subsystems": [ 00:08:24.789 { 00:08:24.789 "subsystem": "bdev", 00:08:24.789 "config": [ 00:08:24.789 { 00:08:24.789 "params": { 00:08:24.789 "trtype": "pcie", 00:08:24.789 "traddr": "0000:00:06.0", 00:08:24.789 "name": "Nvme0" 00:08:24.789 }, 00:08:24.789 "method": "bdev_nvme_attach_controller" 00:08:24.789 }, 00:08:24.789 { 00:08:24.789 "method": "bdev_wait_for_examine" 00:08:24.789 } 00:08:24.789 ] 00:08:24.789 } 00:08:24.789 ] 00:08:24.789 } 00:08:24.789 [2024-11-20 14:58:55.464675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:24.789 [2024-11-20 14:58:55.464820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69688 ] 00:08:25.048 [2024-11-20 14:58:55.606780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.048 [2024-11-20 14:58:55.642663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.048  [2024-11-20T14:58:56.110Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:25.306 00:08:25.306 14:58:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:25.306 14:58:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:25.306 14:58:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:25.306 14:58:55 -- dd/common.sh@11 -- # local nvme_ref= 00:08:25.306 14:58:55 -- dd/common.sh@12 -- # local size=57344 00:08:25.306 14:58:55 -- dd/common.sh@14 -- # local bs=1048576 00:08:25.306 14:58:55 -- dd/common.sh@15 -- # local count=1 00:08:25.306 14:58:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:25.306 14:58:55 -- dd/common.sh@18 -- # gen_conf 00:08:25.306 14:58:55 -- dd/common.sh@31 -- # xtrace_disable 00:08:25.306 14:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:25.306 [2024-11-20 14:58:55.995477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:25.306 [2024-11-20 14:58:55.995940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69701 ] 00:08:25.306 { 00:08:25.306 "subsystems": [ 00:08:25.306 { 00:08:25.306 "subsystem": "bdev", 00:08:25.306 "config": [ 00:08:25.306 { 00:08:25.306 "params": { 00:08:25.306 "trtype": "pcie", 00:08:25.306 "traddr": "0000:00:06.0", 00:08:25.306 "name": "Nvme0" 00:08:25.306 }, 00:08:25.306 "method": "bdev_nvme_attach_controller" 00:08:25.306 }, 00:08:25.306 { 00:08:25.306 "method": "bdev_wait_for_examine" 00:08:25.306 } 00:08:25.306 ] 00:08:25.306 } 00:08:25.306 ] 00:08:25.306 } 00:08:25.565 [2024-11-20 14:58:56.134424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.565 [2024-11-20 14:58:56.176068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.565  [2024-11-20T14:58:56.628Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:25.824 00:08:25.824 14:58:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:25.824 14:58:56 -- dd/basic_rw.sh@23 -- # count=7 00:08:25.824 14:58:56 -- dd/basic_rw.sh@24 -- # count=7 00:08:25.824 14:58:56 -- dd/basic_rw.sh@25 -- # size=57344 00:08:25.824 14:58:56 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:08:25.824 14:58:56 -- dd/common.sh@98 -- # xtrace_disable 00:08:25.824 14:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.758 14:58:57 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:08:26.758 14:58:57 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:26.758 14:58:57 -- dd/common.sh@31 -- # xtrace_disable 00:08:26.758 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.758 [2024-11-20 14:58:57.447633] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 22.11.4 initialization... 00:08:26.758 [2024-11-20 14:58:57.448161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69725 ] 00:08:26.758 { 00:08:26.758 "subsystems": [ 00:08:26.758 { 00:08:26.758 "subsystem": "bdev", 00:08:26.758 "config": [ 00:08:26.758 { 00:08:26.758 "params": { 00:08:26.758 "trtype": "pcie", 00:08:26.758 "traddr": "0000:00:06.0", 00:08:26.758 "name": "Nvme0" 00:08:26.758 }, 00:08:26.758 "method": "bdev_nvme_attach_controller" 00:08:26.758 }, 00:08:26.758 { 00:08:26.758 "method": "bdev_wait_for_examine" 00:08:26.758 } 00:08:26.758 ] 00:08:26.758 } 00:08:26.758 ] 00:08:26.758 } 00:08:27.017 [2024-11-20 14:58:57.595031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.017 [2024-11-20 14:58:57.639265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.017  [2024-11-20T14:58:58.081Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:27.277 00:08:27.277 14:58:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:08:27.277 14:58:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:27.277 14:58:57 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.277 14:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:27.277 [2024-11-20 14:58:58.010001] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:27.277 [2024-11-20 14:58:58.010176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69737 ] 00:08:27.277 { 00:08:27.277 "subsystems": [ 00:08:27.277 { 00:08:27.277 "subsystem": "bdev", 00:08:27.277 "config": [ 00:08:27.277 { 00:08:27.277 "params": { 00:08:27.277 "trtype": "pcie", 00:08:27.277 "traddr": "0000:00:06.0", 00:08:27.277 "name": "Nvme0" 00:08:27.277 }, 00:08:27.277 "method": "bdev_nvme_attach_controller" 00:08:27.277 }, 00:08:27.277 { 00:08:27.277 "method": "bdev_wait_for_examine" 00:08:27.277 } 00:08:27.277 ] 00:08:27.277 } 00:08:27.277 ] 00:08:27.277 } 00:08:27.536 [2024-11-20 14:58:58.148561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.536 [2024-11-20 14:58:58.192623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.536  [2024-11-20T14:58:58.598Z] Copying: 56/56 [kB] (average 54 MBps) 00:08:27.794 00:08:27.794 14:58:58 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.794 14:58:58 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:08:27.794 14:58:58 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:27.794 14:58:58 -- dd/common.sh@11 -- # local nvme_ref= 00:08:27.794 14:58:58 -- dd/common.sh@12 -- # local size=57344 00:08:27.794 14:58:58 -- dd/common.sh@14 -- # local bs=1048576 00:08:27.794 14:58:58 -- dd/common.sh@15 -- # local count=1 00:08:27.794 14:58:58 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:27.794 14:58:58 -- dd/common.sh@18 -- # gen_conf 00:08:27.794 14:58:58 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.794 14:58:58 -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.794 [2024-11-20 14:58:58.530231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:27.794 [2024-11-20 14:58:58.530346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69751 ] 00:08:27.794 { 00:08:27.794 "subsystems": [ 00:08:27.794 { 00:08:27.794 "subsystem": "bdev", 00:08:27.794 "config": [ 00:08:27.794 { 00:08:27.794 "params": { 00:08:27.794 "trtype": "pcie", 00:08:27.794 "traddr": "0000:00:06.0", 00:08:27.794 "name": "Nvme0" 00:08:27.794 }, 00:08:27.794 "method": "bdev_nvme_attach_controller" 00:08:27.794 }, 00:08:27.794 { 00:08:27.794 "method": "bdev_wait_for_examine" 00:08:27.794 } 00:08:27.794 ] 00:08:27.794 } 00:08:27.794 ] 00:08:27.794 } 00:08:28.053 [2024-11-20 14:58:58.660241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.053 [2024-11-20 14:58:58.704299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.053  [2024-11-20T14:58:59.116Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:28.312 00:08:28.312 14:58:59 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:08:28.312 14:58:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:28.312 14:58:59 -- dd/basic_rw.sh@23 -- # count=3 00:08:28.312 14:58:59 -- dd/basic_rw.sh@24 -- # count=3 00:08:28.312 14:58:59 -- dd/basic_rw.sh@25 -- # size=49152 00:08:28.312 14:58:59 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:28.312 14:58:59 -- dd/common.sh@98 -- # xtrace_disable 00:08:28.312 14:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 14:58:59 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:28.882 14:58:59 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:08:28.882 14:58:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:28.882 14:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [2024-11-20 14:58:59.543064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:28.882 [2024-11-20 14:58:59.543212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69769 ] 00:08:28.882 { 00:08:28.882 "subsystems": [ 00:08:28.882 { 00:08:28.882 "subsystem": "bdev", 00:08:28.882 "config": [ 00:08:28.882 { 00:08:28.882 "params": { 00:08:28.882 "trtype": "pcie", 00:08:28.882 "traddr": "0000:00:06.0", 00:08:28.882 "name": "Nvme0" 00:08:28.882 }, 00:08:28.882 "method": "bdev_nvme_attach_controller" 00:08:28.882 }, 00:08:28.882 { 00:08:28.882 "method": "bdev_wait_for_examine" 00:08:28.882 } 00:08:28.882 ] 00:08:28.882 } 00:08:28.882 ] 00:08:28.882 } 00:08:28.882 [2024-11-20 14:58:59.681494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.141 [2024-11-20 14:58:59.723890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.141  [2024-11-20T14:59:00.203Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:29.399 00:08:29.399 14:59:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:08:29.399 14:59:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:29.399 14:59:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.399 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:08:29.399 [2024-11-20 14:59:00.091238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:29.399 [2024-11-20 14:59:00.091390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69781 ] 00:08:29.399 { 00:08:29.399 "subsystems": [ 00:08:29.399 { 00:08:29.399 "subsystem": "bdev", 00:08:29.399 "config": [ 00:08:29.399 { 00:08:29.399 "params": { 00:08:29.399 "trtype": "pcie", 00:08:29.399 "traddr": "0000:00:06.0", 00:08:29.399 "name": "Nvme0" 00:08:29.399 }, 00:08:29.399 "method": "bdev_nvme_attach_controller" 00:08:29.399 }, 00:08:29.399 { 00:08:29.399 "method": "bdev_wait_for_examine" 00:08:29.399 } 00:08:29.399 ] 00:08:29.399 } 00:08:29.399 ] 00:08:29.399 } 00:08:29.658 [2024-11-20 14:59:00.228754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.658 [2024-11-20 14:59:00.269709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.658  [2024-11-20T14:59:00.720Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:29.916 00:08:29.916 14:59:00 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:29.916 14:59:00 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:29.916 14:59:00 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:29.916 14:59:00 -- dd/common.sh@11 -- # local nvme_ref= 00:08:29.916 14:59:00 -- dd/common.sh@12 -- # local size=49152 00:08:29.916 14:59:00 -- dd/common.sh@14 -- # local bs=1048576 00:08:29.916 14:59:00 -- dd/common.sh@15 -- # local count=1 00:08:29.916 14:59:00 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:29.916 14:59:00 -- dd/common.sh@18 -- # gen_conf 00:08:29.916 14:59:00 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.916 14:59:00 -- common/autotest_common.sh@10 -- # set +x 00:08:29.916 [2024-11-20 
14:59:00.615720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:29.916 [2024-11-20 14:59:00.615855] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69795 ] 00:08:29.916 { 00:08:29.916 "subsystems": [ 00:08:29.916 { 00:08:29.916 "subsystem": "bdev", 00:08:29.916 "config": [ 00:08:29.916 { 00:08:29.916 "params": { 00:08:29.916 "trtype": "pcie", 00:08:29.916 "traddr": "0000:00:06.0", 00:08:29.916 "name": "Nvme0" 00:08:29.916 }, 00:08:29.916 "method": "bdev_nvme_attach_controller" 00:08:29.916 }, 00:08:29.916 { 00:08:29.916 "method": "bdev_wait_for_examine" 00:08:29.916 } 00:08:29.916 ] 00:08:29.916 } 00:08:29.916 ] 00:08:29.916 } 00:08:30.175 [2024-11-20 14:59:00.749211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.175 [2024-11-20 14:59:00.792564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.175  [2024-11-20T14:59:01.238Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:30.434 00:08:30.434 14:59:01 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:30.434 14:59:01 -- dd/basic_rw.sh@23 -- # count=3 00:08:30.434 14:59:01 -- dd/basic_rw.sh@24 -- # count=3 00:08:30.434 14:59:01 -- dd/basic_rw.sh@25 -- # size=49152 00:08:30.434 14:59:01 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:30.434 14:59:01 -- dd/common.sh@98 -- # xtrace_disable 00:08:30.434 14:59:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.000 14:59:01 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:31.000 14:59:01 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:31.000 14:59:01 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.000 14:59:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.000 [2024-11-20 14:59:01.708921] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:31.000 [2024-11-20 14:59:01.709482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69813 ] 00:08:31.000 { 00:08:31.000 "subsystems": [ 00:08:31.000 { 00:08:31.000 "subsystem": "bdev", 00:08:31.000 "config": [ 00:08:31.000 { 00:08:31.000 "params": { 00:08:31.000 "trtype": "pcie", 00:08:31.000 "traddr": "0000:00:06.0", 00:08:31.000 "name": "Nvme0" 00:08:31.000 }, 00:08:31.000 "method": "bdev_nvme_attach_controller" 00:08:31.000 }, 00:08:31.000 { 00:08:31.000 "method": "bdev_wait_for_examine" 00:08:31.000 } 00:08:31.000 ] 00:08:31.000 } 00:08:31.000 ] 00:08:31.000 } 00:08:31.258 [2024-11-20 14:59:01.847936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.258 [2024-11-20 14:59:01.889370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.258  [2024-11-20T14:59:02.321Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:31.517 00:08:31.517 14:59:02 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:31.517 14:59:02 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:31.517 14:59:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.517 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.517 [2024-11-20 14:59:02.245357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:31.517 [2024-11-20 14:59:02.245507] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69825 ] 00:08:31.517 { 00:08:31.517 "subsystems": [ 00:08:31.517 { 00:08:31.517 "subsystem": "bdev", 00:08:31.517 "config": [ 00:08:31.517 { 00:08:31.517 "params": { 00:08:31.517 "trtype": "pcie", 00:08:31.517 "traddr": "0000:00:06.0", 00:08:31.517 "name": "Nvme0" 00:08:31.517 }, 00:08:31.517 "method": "bdev_nvme_attach_controller" 00:08:31.517 }, 00:08:31.517 { 00:08:31.517 "method": "bdev_wait_for_examine" 00:08:31.517 } 00:08:31.517 ] 00:08:31.517 } 00:08:31.517 ] 00:08:31.517 } 00:08:31.776 [2024-11-20 14:59:02.389472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.776 [2024-11-20 14:59:02.427028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.776  [2024-11-20T14:59:02.837Z] Copying: 48/48 [kB] (average 46 MBps) 00:08:32.034 00:08:32.034 14:59:02 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.034 14:59:02 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:32.034 14:59:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:32.034 14:59:02 -- dd/common.sh@11 -- # local nvme_ref= 00:08:32.034 14:59:02 -- dd/common.sh@12 -- # local size=49152 00:08:32.034 14:59:02 -- dd/common.sh@14 -- # local bs=1048576 00:08:32.034 14:59:02 -- dd/common.sh@15 -- # local count=1 00:08:32.034 14:59:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:32.034 14:59:02 -- dd/common.sh@18 -- # gen_conf 00:08:32.034 14:59:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.034 14:59:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.034 [2024-11-20 
14:59:02.785793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.034 [2024-11-20 14:59:02.785923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69839 ] 00:08:32.034 { 00:08:32.034 "subsystems": [ 00:08:32.034 { 00:08:32.034 "subsystem": "bdev", 00:08:32.034 "config": [ 00:08:32.034 { 00:08:32.034 "params": { 00:08:32.034 "trtype": "pcie", 00:08:32.034 "traddr": "0000:00:06.0", 00:08:32.034 "name": "Nvme0" 00:08:32.034 }, 00:08:32.034 "method": "bdev_nvme_attach_controller" 00:08:32.034 }, 00:08:32.034 { 00:08:32.034 "method": "bdev_wait_for_examine" 00:08:32.034 } 00:08:32.034 ] 00:08:32.034 } 00:08:32.034 ] 00:08:32.034 } 00:08:32.292 [2024-11-20 14:59:02.921893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.292 [2024-11-20 14:59:02.963411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.292  [2024-11-20T14:59:03.355Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:32.551 00:08:32.551 00:08:32.551 real 0m13.705s 00:08:32.551 user 0m10.062s 00:08:32.551 sys 0m2.499s 00:08:32.551 14:59:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.551 ************************************ 00:08:32.551 END TEST dd_rw 00:08:32.551 14:59:03 -- common/autotest_common.sh@10 -- # set +x 00:08:32.551 ************************************ 00:08:32.551 14:59:03 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:32.551 14:59:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.551 14:59:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.551 14:59:03 -- common/autotest_common.sh@10 -- # set +x 00:08:32.551 ************************************ 00:08:32.551 START TEST dd_rw_offset 00:08:32.551 ************************************ 00:08:32.551 14:59:03 -- common/autotest_common.sh@1114 -- # basic_offset 00:08:32.551 14:59:03 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:32.551 14:59:03 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:32.551 14:59:03 -- dd/common.sh@98 -- # xtrace_disable 00:08:32.551 14:59:03 -- common/autotest_common.sh@10 -- # set +x 00:08:32.809 14:59:03 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:32.810 14:59:03 -- dd/basic_rw.sh@56 -- # 
data=1laib6zmtwnqphy64fqdsriwzqcoe1jmxw5p3k9ej2w5wl7srxba5e4wu8wg9m9z4lv0xpklw2uqb0dmb5o1ewlb84mh5r3zs29aonqa4qgdz1873jbgfzk8ir9pyi82k0ruhujyhcri0kgftrtab0bl45jx39la9jgcch5q7kge6ay0hjl5bjuzfr6svnis9sg7mze2ap1kg51xtpxrxnamimt1t87cgosv5wma04ckbjsukaibl1fx1yql482drt98mz1dmt1he7d3zsu3whr4a0rnn32kz5c9384qxplsqfhp1lmorr0e0wbwzz15prkifhhb0ybdus45rz1ztiktzw3g358cp3etg0b5ybk654ll7yokriidmnveck1iycuuhl4l1mjxaj2lmu5lqd2ff5uixghlazlhi92f5odx0eqhb6p3i3xo9f8b8k7yc6wyoqqduzlh67mtdw4wnbw6u7qbol1ynhuk9fypn6z551hj84t6q4cgkepx6pld5fkaku2t6vnqcwgicl2n2tgj1u6ngowj5v8oxkemmpxm8q9om297k4ziclpzrkf6h4ax1cdxm69lyqv5e07x9jyc026ufxblf9kgj1m03fzt0j6mmhnzp2flphwmt40qpnbqnt4njnediz34jt5rgb44bbzwh0t1enlyqx9l1hmhmymp5xtj52r2m8hp9acg4hjo9fo5nvxvsvhpfhxj7evueerodd750z76e1xqymsooqax0h3patf7vbcz0xxx5k53vcet1h4ll1ui54g9fkqmtxrjpnzcs1czf48a3uu39fri0h4zo4t27wxnurspcpvrwgout7u9hkrch99um1j914ttjemcww2b7u40kffupol07flco8rsw691ufmbd334uy6d6n2kbnkbi5w2iejadhopsu9o8kbcya83gqj1eh4v2kh6szi3dbquihtp32dzpbxepz0n3ty9ao1m4lxsvslq3y7ss7vmwe43i9a7pikndbu9879pc10yjqww6biwygjce9c0aolox8bf0rq1lwq7rsjab7uoceuszoicbzeeeg7xlsiexodedyomr22j56wbj51d1wy8k9igqqcy8vso8ptybksl3hvn4wdd82jip2tnjsiw7magcahwpiogrz9u4ahcmsqybje0vtgstqb106577gl16hbpau69doaayjckm0b88ldivwmm04c9h6znp6zt4c3bz7l56zqvwupni7n2y464o8u95ffyf2yeq62ckkod0ztj51cuvyona1ra4nx4jq66ce2muc36npjg0f5u3hmariffnzyccl3kjcf230lhi9txiu3bpko5r41g5albxxmmy9x8u3aer3n1xu5qdro73k7ohvz4d882wctmnte2bb23y3nrn50nrh2nq0x0ssfbbmq5r8epcz2k6is7100402f296midm8r2crpf9fstg3tr1a0y2n0lflz5d01mzrg2ues0gca2zgpuhqtca2byjbntminqs3uk206n5eowgwf4gavpeja9lgiedk2w8puf6m1l5ujmp3jdg72vgjfyeitjda3zovu5wdyayoavhbpb7lcnr4wd3f238px7bcqjpgkhmzkx30ytcvkm08iwx0319t2y7otdtyokqijz8xwkez05ifxq2ujtlbn9bqer603yro9guuq5xxfkb9bmrv4o08rbrthn6xa3a2etcftqus4t2tfbeawisjh2laa12h49kxo2v3miaxxfh0zv3rrghdkpipsljkho899tv05lonj3k7i2yo3f6wydf185gid46dgsoe5o38edjsaodbkcx18g1asq2pbvsuimb8t0cpquakp7knnm2e0eo2cjcyn6hxm0vrbld8gkn3b3p5zapqbdls0yyv7sidmf11qxhh9jn236t5g8oxumzysyku14xq7mcca74enas0r7lg346yh76npy5ua8x0w9bczhwio5hd78n03jc3aqq1afr3eweq5v9njgenz32raclsb1x7f08iv1v5strl50i7g2f8jhsjgnsl27r854vltdt4kwo5lh4ie5spbypbi1d3xwumlaltvfzh5vnmx1g7hvn8tv2abh4avdnu1il4oc6m6706u2d6nk2y8ekx5e0oipkxhiwip44ojc71k4rj18hu2gcvu8c134aoyjedd4jsm4o4zooc2ji4o0ia5hhr1x9fclubm0w17f1mszumh69oldp5h66pl6gmkkehwfpwo1igcs69qtrginl21wlyny1l7x2a973bkl3q49uc1gbgrqpz8pmet4cafwa7eizuy7acmgjirv1oz7rcfcr7imaz7n0l2sm1kdovb2dy2fdp814fn3yc7wk2tu9tow9wwm8oxxbj9gk6127icgojbzj18vv1855mg8pw54w0vm2z6sevu0bbv9tuub42hmakyg279t8n3p1ggir45ii13ks9aa5monzrfc51z7t8ezppgd2e3l0u6fks159g81dbkvhyugzld7jk2oqmnhzsprea43gu0420vrcc1uyv03ydfpnw74mjckshmokvyulo9np4d204nm7bhjhvp14nwf7terksoy11n7yhv47tohe5f49kbdscvk8z4ayx645d7rcxjtsa1anysiuy44ndj723komaqs4ljs7ax6dejvhb4fo0arq21ib1gbk34siugjfib792u1bod9gzihmm6lfp1c2ruv6szi8vwjv8sxnunxr2yhpdxf02erbdj7ezutbx3268imzkt428ls6nnlsaqtq2qc2vxx5rgulyw9sp49dwu4jezv09elgaaeuwacycrfovh12naji55tvdihj0zvw099u7p2e0yo88tw4curahfyybvpinjxbhv5avthd6adv2vdhnwt0izo4jncotmhzjanfulkmcuixkhwxb5r3qq64p0lolk6lnw3fldmt2j5yezoi503s4ji22363zv9pv2vliweh83bhiohc18jgpnip9pwdrc8nsr9zyy4s1e8j34ag03aktxc7qq1mch7n7mpsrr04hlgspzaajnp6hnu49nz4gx2mde96jsbuejgbtpnshgudwcm2pg0wwppduoxqvk3wjcfu8konfrix4ahy2odkr65ft1h9d8zwy7oyxeavx41t8gwlor0n1rfr4gmla126piq1c9777xb0xdz2idflhfi3f0b1wf3j8n04j3njku5rgvwbefscqvvxmc7kxpj4c1et4jgbjkzufqjdqu81o0e3zqsi03fnlh0pthnz1ntwbii9mpgrdzo2mwc4qzk4hipjo2q0ubjbtu5xflmhim37kfsgyk0vkzj9si45ed35rvmii4h59q6ygppcv8uw3ndxkdv6nw9gvup85iafjzh5gtcs27ieuhwr0zi2sa65zqlqxg5uro4yfkhsce11jn2oxt8u705d0uwlr4t70elcs061au4jrnt4cbrgq1kcoh8s2nxhqrwbpbtcl5h44xkl2wyy31oti0io1x43ktb2xdorw2i9zvdtw30u
3k72w42fn8xrabd12fkm6521bvps214bs2eu5hlipbstcte0plrkz806vtwqes6a5nas6cwt5v3lx7wwdw8jmukuyfzi1kajge77o9lcwbqegfz4juuhjxcftqkuu6vz154iw0xrua23q94t5ui14wj82m3bg1f9fpr6n3j2e1ij9rz7c5mb8tb92ciez2ctucq2kwobeocwj5gfvpk8vpwgi7n1z83szcww4msxdiomsg31ze9agpxz32btfprmjkfq6ykpw4n7g44wtye1r4zcgq0oz1u9423yem0k96vo5smoy4ewxn5kowgoeski684sey7syy6se79hyyd9522393ki3kf09225vh9jfwqtcmwhi7z0gymhqtnkxabd69y3e7tw5acenmyk7qmlqq5mmaxp99ndswm4861bgi72xco08qf65q4r9zqlomj127wz70z49qxu1drysyaxh088chvkatex44elmeobsckcb277lyausm79smh5xce426hao2g5stt4rlclf6dn7718y3k42y5278 00:08:32.810 14:59:03 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:32.810 14:59:03 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:32.810 14:59:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:32.810 14:59:03 -- common/autotest_common.sh@10 -- # set +x 00:08:32.810 [2024-11-20 14:59:03.417186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:32.810 [2024-11-20 14:59:03.417557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69874 ] 00:08:32.810 { 00:08:32.810 "subsystems": [ 00:08:32.810 { 00:08:32.810 "subsystem": "bdev", 00:08:32.810 "config": [ 00:08:32.810 { 00:08:32.810 "params": { 00:08:32.810 "trtype": "pcie", 00:08:32.810 "traddr": "0000:00:06.0", 00:08:32.810 "name": "Nvme0" 00:08:32.810 }, 00:08:32.810 "method": "bdev_nvme_attach_controller" 00:08:32.810 }, 00:08:32.810 { 00:08:32.810 "method": "bdev_wait_for_examine" 00:08:32.810 } 00:08:32.810 ] 00:08:32.810 } 00:08:32.810 ] 00:08:32.810 } 00:08:32.810 [2024-11-20 14:59:03.551488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.810 [2024-11-20 14:59:03.589661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.068  [2024-11-20T14:59:04.131Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:33.327 00:08:33.327 14:59:03 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:33.327 14:59:03 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:33.327 14:59:03 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.327 14:59:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.327 [2024-11-20 14:59:03.962254] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:33.327 [2024-11-20 14:59:03.962421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69881 ] 00:08:33.327 { 00:08:33.327 "subsystems": [ 00:08:33.327 { 00:08:33.327 "subsystem": "bdev", 00:08:33.327 "config": [ 00:08:33.327 { 00:08:33.327 "params": { 00:08:33.327 "trtype": "pcie", 00:08:33.327 "traddr": "0000:00:06.0", 00:08:33.327 "name": "Nvme0" 00:08:33.327 }, 00:08:33.327 "method": "bdev_nvme_attach_controller" 00:08:33.327 }, 00:08:33.327 { 00:08:33.327 "method": "bdev_wait_for_examine" 00:08:33.327 } 00:08:33.327 ] 00:08:33.327 } 00:08:33.327 ] 00:08:33.327 } 00:08:33.327 [2024-11-20 14:59:04.102347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.587 [2024-11-20 14:59:04.144896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.587  [2024-11-20T14:59:04.650Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:33.846 00:08:33.846 14:59:04 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:33.846 14:59:04 -- dd/basic_rw.sh@72 -- # [[ 1laib6zmtwnqphy64fqdsriwzqcoe1jmxw5p3k9ej2w5wl7srxba5e4wu8wg9m9z4lv0xpklw2uqb0dmb5o1ewlb84mh5r3zs29aonqa4qgdz1873jbgfzk8ir9pyi82k0ruhujyhcri0kgftrtab0bl45jx39la9jgcch5q7kge6ay0hjl5bjuzfr6svnis9sg7mze2ap1kg51xtpxrxnamimt1t87cgosv5wma04ckbjsukaibl1fx1yql482drt98mz1dmt1he7d3zsu3whr4a0rnn32kz5c9384qxplsqfhp1lmorr0e0wbwzz15prkifhhb0ybdus45rz1ztiktzw3g358cp3etg0b5ybk654ll7yokriidmnveck1iycuuhl4l1mjxaj2lmu5lqd2ff5uixghlazlhi92f5odx0eqhb6p3i3xo9f8b8k7yc6wyoqqduzlh67mtdw4wnbw6u7qbol1ynhuk9fypn6z551hj84t6q4cgkepx6pld5fkaku2t6vnqcwgicl2n2tgj1u6ngowj5v8oxkemmpxm8q9om297k4ziclpzrkf6h4ax1cdxm69lyqv5e07x9jyc026ufxblf9kgj1m03fzt0j6mmhnzp2flphwmt40qpnbqnt4njnediz34jt5rgb44bbzwh0t1enlyqx9l1hmhmymp5xtj52r2m8hp9acg4hjo9fo5nvxvsvhpfhxj7evueerodd750z76e1xqymsooqax0h3patf7vbcz0xxx5k53vcet1h4ll1ui54g9fkqmtxrjpnzcs1czf48a3uu39fri0h4zo4t27wxnurspcpvrwgout7u9hkrch99um1j914ttjemcww2b7u40kffupol07flco8rsw691ufmbd334uy6d6n2kbnkbi5w2iejadhopsu9o8kbcya83gqj1eh4v2kh6szi3dbquihtp32dzpbxepz0n3ty9ao1m4lxsvslq3y7ss7vmwe43i9a7pikndbu9879pc10yjqww6biwygjce9c0aolox8bf0rq1lwq7rsjab7uoceuszoicbzeeeg7xlsiexodedyomr22j56wbj51d1wy8k9igqqcy8vso8ptybksl3hvn4wdd82jip2tnjsiw7magcahwpiogrz9u4ahcmsqybje0vtgstqb106577gl16hbpau69doaayjckm0b88ldivwmm04c9h6znp6zt4c3bz7l56zqvwupni7n2y464o8u95ffyf2yeq62ckkod0ztj51cuvyona1ra4nx4jq66ce2muc36npjg0f5u3hmariffnzyccl3kjcf230lhi9txiu3bpko5r41g5albxxmmy9x8u3aer3n1xu5qdro73k7ohvz4d882wctmnte2bb23y3nrn50nrh2nq0x0ssfbbmq5r8epcz2k6is7100402f296midm8r2crpf9fstg3tr1a0y2n0lflz5d01mzrg2ues0gca2zgpuhqtca2byjbntminqs3uk206n5eowgwf4gavpeja9lgiedk2w8puf6m1l5ujmp3jdg72vgjfyeitjda3zovu5wdyayoavhbpb7lcnr4wd3f238px7bcqjpgkhmzkx30ytcvkm08iwx0319t2y7otdtyokqijz8xwkez05ifxq2ujtlbn9bqer603yro9guuq5xxfkb9bmrv4o08rbrthn6xa3a2etcftqus4t2tfbeawisjh2laa12h49kxo2v3miaxxfh0zv3rrghdkpipsljkho899tv05lonj3k7i2yo3f6wydf185gid46dgsoe5o38edjsaodbkcx18g1asq2pbvsuimb8t0cpquakp7knnm2e0eo2cjcyn6hxm0vrbld8gkn3b3p5zapqbdls0yyv7sidmf11qxhh9jn236t5g8oxumzysyku14xq7mcca74enas0r7lg346yh76npy5ua8x0w9bczhwio5hd78n03jc3aqq1afr3eweq5v9njgenz32raclsb1x7f08iv1v5strl50i7g2f8jhsjgnsl27r854vltdt4kwo5lh4ie5spbypbi1d3xwumlaltvfzh5vnmx1g7hvn8tv2abh4avdnu1il4oc6m6706u2d6nk2y8ekx5e0oipkxhiwip44ojc71k4rj18hu2gcvu8c134aoyjedd4jsm4o4zooc2ji4o0ia5hhr1x9fclubm0w17f1mszumh69oldp5h66pl6gmkkehwfpwo1igcs69qtrginl21wlyny1l7x2a973bkl3q49uc1gbgrqpz8pmet4cafwa7eizuy7acmgjirv1oz7rcfcr7imaz7n0l2sm1kdov
b2dy2fdp814fn3yc7wk2tu9tow9wwm8oxxbj9gk6127icgojbzj18vv1855mg8pw54w0vm2z6sevu0bbv9tuub42hmakyg279t8n3p1ggir45ii13ks9aa5monzrfc51z7t8ezppgd2e3l0u6fks159g81dbkvhyugzld7jk2oqmnhzsprea43gu0420vrcc1uyv03ydfpnw74mjckshmokvyulo9np4d204nm7bhjhvp14nwf7terksoy11n7yhv47tohe5f49kbdscvk8z4ayx645d7rcxjtsa1anysiuy44ndj723komaqs4ljs7ax6dejvhb4fo0arq21ib1gbk34siugjfib792u1bod9gzihmm6lfp1c2ruv6szi8vwjv8sxnunxr2yhpdxf02erbdj7ezutbx3268imzkt428ls6nnlsaqtq2qc2vxx5rgulyw9sp49dwu4jezv09elgaaeuwacycrfovh12naji55tvdihj0zvw099u7p2e0yo88tw4curahfyybvpinjxbhv5avthd6adv2vdhnwt0izo4jncotmhzjanfulkmcuixkhwxb5r3qq64p0lolk6lnw3fldmt2j5yezoi503s4ji22363zv9pv2vliweh83bhiohc18jgpnip9pwdrc8nsr9zyy4s1e8j34ag03aktxc7qq1mch7n7mpsrr04hlgspzaajnp6hnu49nz4gx2mde96jsbuejgbtpnshgudwcm2pg0wwppduoxqvk3wjcfu8konfrix4ahy2odkr65ft1h9d8zwy7oyxeavx41t8gwlor0n1rfr4gmla126piq1c9777xb0xdz2idflhfi3f0b1wf3j8n04j3njku5rgvwbefscqvvxmc7kxpj4c1et4jgbjkzufqjdqu81o0e3zqsi03fnlh0pthnz1ntwbii9mpgrdzo2mwc4qzk4hipjo2q0ubjbtu5xflmhim37kfsgyk0vkzj9si45ed35rvmii4h59q6ygppcv8uw3ndxkdv6nw9gvup85iafjzh5gtcs27ieuhwr0zi2sa65zqlqxg5uro4yfkhsce11jn2oxt8u705d0uwlr4t70elcs061au4jrnt4cbrgq1kcoh8s2nxhqrwbpbtcl5h44xkl2wyy31oti0io1x43ktb2xdorw2i9zvdtw30u3k72w42fn8xrabd12fkm6521bvps214bs2eu5hlipbstcte0plrkz806vtwqes6a5nas6cwt5v3lx7wwdw8jmukuyfzi1kajge77o9lcwbqegfz4juuhjxcftqkuu6vz154iw0xrua23q94t5ui14wj82m3bg1f9fpr6n3j2e1ij9rz7c5mb8tb92ciez2ctucq2kwobeocwj5gfvpk8vpwgi7n1z83szcww4msxdiomsg31ze9agpxz32btfprmjkfq6ykpw4n7g44wtye1r4zcgq0oz1u9423yem0k96vo5smoy4ewxn5kowgoeski684sey7syy6se79hyyd9522393ki3kf09225vh9jfwqtcmwhi7z0gymhqtnkxabd69y3e7tw5acenmyk7qmlqq5mmaxp99ndswm4861bgi72xco08qf65q4r9zqlomj127wz70z49qxu1drysyaxh088chvkatex44elmeobsckcb277lyausm79smh5xce426hao2g5stt4rlclf6dn7718y3k42y5278 == \1\l\a\i\b\6\z\m\t\w\n\q\p\h\y\6\4\f\q\d\s\r\i\w\z\q\c\o\e\1\j\m\x\w\5\p\3\k\9\e\j\2\w\5\w\l\7\s\r\x\b\a\5\e\4\w\u\8\w\g\9\m\9\z\4\l\v\0\x\p\k\l\w\2\u\q\b\0\d\m\b\5\o\1\e\w\l\b\8\4\m\h\5\r\3\z\s\2\9\a\o\n\q\a\4\q\g\d\z\1\8\7\3\j\b\g\f\z\k\8\i\r\9\p\y\i\8\2\k\0\r\u\h\u\j\y\h\c\r\i\0\k\g\f\t\r\t\a\b\0\b\l\4\5\j\x\3\9\l\a\9\j\g\c\c\h\5\q\7\k\g\e\6\a\y\0\h\j\l\5\b\j\u\z\f\r\6\s\v\n\i\s\9\s\g\7\m\z\e\2\a\p\1\k\g\5\1\x\t\p\x\r\x\n\a\m\i\m\t\1\t\8\7\c\g\o\s\v\5\w\m\a\0\4\c\k\b\j\s\u\k\a\i\b\l\1\f\x\1\y\q\l\4\8\2\d\r\t\9\8\m\z\1\d\m\t\1\h\e\7\d\3\z\s\u\3\w\h\r\4\a\0\r\n\n\3\2\k\z\5\c\9\3\8\4\q\x\p\l\s\q\f\h\p\1\l\m\o\r\r\0\e\0\w\b\w\z\z\1\5\p\r\k\i\f\h\h\b\0\y\b\d\u\s\4\5\r\z\1\z\t\i\k\t\z\w\3\g\3\5\8\c\p\3\e\t\g\0\b\5\y\b\k\6\5\4\l\l\7\y\o\k\r\i\i\d\m\n\v\e\c\k\1\i\y\c\u\u\h\l\4\l\1\m\j\x\a\j\2\l\m\u\5\l\q\d\2\f\f\5\u\i\x\g\h\l\a\z\l\h\i\9\2\f\5\o\d\x\0\e\q\h\b\6\p\3\i\3\x\o\9\f\8\b\8\k\7\y\c\6\w\y\o\q\q\d\u\z\l\h\6\7\m\t\d\w\4\w\n\b\w\6\u\7\q\b\o\l\1\y\n\h\u\k\9\f\y\p\n\6\z\5\5\1\h\j\8\4\t\6\q\4\c\g\k\e\p\x\6\p\l\d\5\f\k\a\k\u\2\t\6\v\n\q\c\w\g\i\c\l\2\n\2\t\g\j\1\u\6\n\g\o\w\j\5\v\8\o\x\k\e\m\m\p\x\m\8\q\9\o\m\2\9\7\k\4\z\i\c\l\p\z\r\k\f\6\h\4\a\x\1\c\d\x\m\6\9\l\y\q\v\5\e\0\7\x\9\j\y\c\0\2\6\u\f\x\b\l\f\9\k\g\j\1\m\0\3\f\z\t\0\j\6\m\m\h\n\z\p\2\f\l\p\h\w\m\t\4\0\q\p\n\b\q\n\t\4\n\j\n\e\d\i\z\3\4\j\t\5\r\g\b\4\4\b\b\z\w\h\0\t\1\e\n\l\y\q\x\9\l\1\h\m\h\m\y\m\p\5\x\t\j\5\2\r\2\m\8\h\p\9\a\c\g\4\h\j\o\9\f\o\5\n\v\x\v\s\v\h\p\f\h\x\j\7\e\v\u\e\e\r\o\d\d\7\5\0\z\7\6\e\1\x\q\y\m\s\o\o\q\a\x\0\h\3\p\a\t\f\7\v\b\c\z\0\x\x\x\5\k\5\3\v\c\e\t\1\h\4\l\l\1\u\i\5\4\g\9\f\k\q\m\t\x\r\j\p\n\z\c\s\1\c\z\f\4\8\a\3\u\u\3\9\f\r\i\0\h\4\z\o\4\t\2\7\w\x\n\u\r\s\p\c\p\v\r\w\g\o\u\t\7\u\9\h\k\r\c\h\9\9\u\m\1\j\9\1\4\t\t\j\e\m\c\w\w\2\b\7\u\4\0\k\f\f\u\p\o\l\0\7\f\l\c\o\8\r\s\w\6\9\1\u\f\m\b\d\3\3\4\u\y\6\d\6\n\2
\k\b\n\k\b\i\5\w\2\i\e\j\a\d\h\o\p\s\u\9\o\8\k\b\c\y\a\8\3\g\q\j\1\e\h\4\v\2\k\h\6\s\z\i\3\d\b\q\u\i\h\t\p\3\2\d\z\p\b\x\e\p\z\0\n\3\t\y\9\a\o\1\m\4\l\x\s\v\s\l\q\3\y\7\s\s\7\v\m\w\e\4\3\i\9\a\7\p\i\k\n\d\b\u\9\8\7\9\p\c\1\0\y\j\q\w\w\6\b\i\w\y\g\j\c\e\9\c\0\a\o\l\o\x\8\b\f\0\r\q\1\l\w\q\7\r\s\j\a\b\7\u\o\c\e\u\s\z\o\i\c\b\z\e\e\e\g\7\x\l\s\i\e\x\o\d\e\d\y\o\m\r\2\2\j\5\6\w\b\j\5\1\d\1\w\y\8\k\9\i\g\q\q\c\y\8\v\s\o\8\p\t\y\b\k\s\l\3\h\v\n\4\w\d\d\8\2\j\i\p\2\t\n\j\s\i\w\7\m\a\g\c\a\h\w\p\i\o\g\r\z\9\u\4\a\h\c\m\s\q\y\b\j\e\0\v\t\g\s\t\q\b\1\0\6\5\7\7\g\l\1\6\h\b\p\a\u\6\9\d\o\a\a\y\j\c\k\m\0\b\8\8\l\d\i\v\w\m\m\0\4\c\9\h\6\z\n\p\6\z\t\4\c\3\b\z\7\l\5\6\z\q\v\w\u\p\n\i\7\n\2\y\4\6\4\o\8\u\9\5\f\f\y\f\2\y\e\q\6\2\c\k\k\o\d\0\z\t\j\5\1\c\u\v\y\o\n\a\1\r\a\4\n\x\4\j\q\6\6\c\e\2\m\u\c\3\6\n\p\j\g\0\f\5\u\3\h\m\a\r\i\f\f\n\z\y\c\c\l\3\k\j\c\f\2\3\0\l\h\i\9\t\x\i\u\3\b\p\k\o\5\r\4\1\g\5\a\l\b\x\x\m\m\y\9\x\8\u\3\a\e\r\3\n\1\x\u\5\q\d\r\o\7\3\k\7\o\h\v\z\4\d\8\8\2\w\c\t\m\n\t\e\2\b\b\2\3\y\3\n\r\n\5\0\n\r\h\2\n\q\0\x\0\s\s\f\b\b\m\q\5\r\8\e\p\c\z\2\k\6\i\s\7\1\0\0\4\0\2\f\2\9\6\m\i\d\m\8\r\2\c\r\p\f\9\f\s\t\g\3\t\r\1\a\0\y\2\n\0\l\f\l\z\5\d\0\1\m\z\r\g\2\u\e\s\0\g\c\a\2\z\g\p\u\h\q\t\c\a\2\b\y\j\b\n\t\m\i\n\q\s\3\u\k\2\0\6\n\5\e\o\w\g\w\f\4\g\a\v\p\e\j\a\9\l\g\i\e\d\k\2\w\8\p\u\f\6\m\1\l\5\u\j\m\p\3\j\d\g\7\2\v\g\j\f\y\e\i\t\j\d\a\3\z\o\v\u\5\w\d\y\a\y\o\a\v\h\b\p\b\7\l\c\n\r\4\w\d\3\f\2\3\8\p\x\7\b\c\q\j\p\g\k\h\m\z\k\x\3\0\y\t\c\v\k\m\0\8\i\w\x\0\3\1\9\t\2\y\7\o\t\d\t\y\o\k\q\i\j\z\8\x\w\k\e\z\0\5\i\f\x\q\2\u\j\t\l\b\n\9\b\q\e\r\6\0\3\y\r\o\9\g\u\u\q\5\x\x\f\k\b\9\b\m\r\v\4\o\0\8\r\b\r\t\h\n\6\x\a\3\a\2\e\t\c\f\t\q\u\s\4\t\2\t\f\b\e\a\w\i\s\j\h\2\l\a\a\1\2\h\4\9\k\x\o\2\v\3\m\i\a\x\x\f\h\0\z\v\3\r\r\g\h\d\k\p\i\p\s\l\j\k\h\o\8\9\9\t\v\0\5\l\o\n\j\3\k\7\i\2\y\o\3\f\6\w\y\d\f\1\8\5\g\i\d\4\6\d\g\s\o\e\5\o\3\8\e\d\j\s\a\o\d\b\k\c\x\1\8\g\1\a\s\q\2\p\b\v\s\u\i\m\b\8\t\0\c\p\q\u\a\k\p\7\k\n\n\m\2\e\0\e\o\2\c\j\c\y\n\6\h\x\m\0\v\r\b\l\d\8\g\k\n\3\b\3\p\5\z\a\p\q\b\d\l\s\0\y\y\v\7\s\i\d\m\f\1\1\q\x\h\h\9\j\n\2\3\6\t\5\g\8\o\x\u\m\z\y\s\y\k\u\1\4\x\q\7\m\c\c\a\7\4\e\n\a\s\0\r\7\l\g\3\4\6\y\h\7\6\n\p\y\5\u\a\8\x\0\w\9\b\c\z\h\w\i\o\5\h\d\7\8\n\0\3\j\c\3\a\q\q\1\a\f\r\3\e\w\e\q\5\v\9\n\j\g\e\n\z\3\2\r\a\c\l\s\b\1\x\7\f\0\8\i\v\1\v\5\s\t\r\l\5\0\i\7\g\2\f\8\j\h\s\j\g\n\s\l\2\7\r\8\5\4\v\l\t\d\t\4\k\w\o\5\l\h\4\i\e\5\s\p\b\y\p\b\i\1\d\3\x\w\u\m\l\a\l\t\v\f\z\h\5\v\n\m\x\1\g\7\h\v\n\8\t\v\2\a\b\h\4\a\v\d\n\u\1\i\l\4\o\c\6\m\6\7\0\6\u\2\d\6\n\k\2\y\8\e\k\x\5\e\0\o\i\p\k\x\h\i\w\i\p\4\4\o\j\c\7\1\k\4\r\j\1\8\h\u\2\g\c\v\u\8\c\1\3\4\a\o\y\j\e\d\d\4\j\s\m\4\o\4\z\o\o\c\2\j\i\4\o\0\i\a\5\h\h\r\1\x\9\f\c\l\u\b\m\0\w\1\7\f\1\m\s\z\u\m\h\6\9\o\l\d\p\5\h\6\6\p\l\6\g\m\k\k\e\h\w\f\p\w\o\1\i\g\c\s\6\9\q\t\r\g\i\n\l\2\1\w\l\y\n\y\1\l\7\x\2\a\9\7\3\b\k\l\3\q\4\9\u\c\1\g\b\g\r\q\p\z\8\p\m\e\t\4\c\a\f\w\a\7\e\i\z\u\y\7\a\c\m\g\j\i\r\v\1\o\z\7\r\c\f\c\r\7\i\m\a\z\7\n\0\l\2\s\m\1\k\d\o\v\b\2\d\y\2\f\d\p\8\1\4\f\n\3\y\c\7\w\k\2\t\u\9\t\o\w\9\w\w\m\8\o\x\x\b\j\9\g\k\6\1\2\7\i\c\g\o\j\b\z\j\1\8\v\v\1\8\5\5\m\g\8\p\w\5\4\w\0\v\m\2\z\6\s\e\v\u\0\b\b\v\9\t\u\u\b\4\2\h\m\a\k\y\g\2\7\9\t\8\n\3\p\1\g\g\i\r\4\5\i\i\1\3\k\s\9\a\a\5\m\o\n\z\r\f\c\5\1\z\7\t\8\e\z\p\p\g\d\2\e\3\l\0\u\6\f\k\s\1\5\9\g\8\1\d\b\k\v\h\y\u\g\z\l\d\7\j\k\2\o\q\m\n\h\z\s\p\r\e\a\4\3\g\u\0\4\2\0\v\r\c\c\1\u\y\v\0\3\y\d\f\p\n\w\7\4\m\j\c\k\s\h\m\o\k\v\y\u\l\o\9\n\p\4\d\2\0\4\n\m\7\b\h\j\h\v\p\1\4\n\w\f\7\t\e\r\k\s\o\y\1\1\n\7\y\h\v\4\7\t\o\h\e\5\f\4\9\k\b\d\s\c\v\k\8\z\4\a\y\x\6\4\5\d\7\r\c\x\j\t\s\a\1\a\n\y\s\i\u\y\4\4\n\d\j\7\2\3\k\o\m\a\q\s\4\l\j\s\7\a\x\6\d\e\j\
v\h\b\4\f\o\0\a\r\q\2\1\i\b\1\g\b\k\3\4\s\i\u\g\j\f\i\b\7\9\2\u\1\b\o\d\9\g\z\i\h\m\m\6\l\f\p\1\c\2\r\u\v\6\s\z\i\8\v\w\j\v\8\s\x\n\u\n\x\r\2\y\h\p\d\x\f\0\2\e\r\b\d\j\7\e\z\u\t\b\x\3\2\6\8\i\m\z\k\t\4\2\8\l\s\6\n\n\l\s\a\q\t\q\2\q\c\2\v\x\x\5\r\g\u\l\y\w\9\s\p\4\9\d\w\u\4\j\e\z\v\0\9\e\l\g\a\a\e\u\w\a\c\y\c\r\f\o\v\h\1\2\n\a\j\i\5\5\t\v\d\i\h\j\0\z\v\w\0\9\9\u\7\p\2\e\0\y\o\8\8\t\w\4\c\u\r\a\h\f\y\y\b\v\p\i\n\j\x\b\h\v\5\a\v\t\h\d\6\a\d\v\2\v\d\h\n\w\t\0\i\z\o\4\j\n\c\o\t\m\h\z\j\a\n\f\u\l\k\m\c\u\i\x\k\h\w\x\b\5\r\3\q\q\6\4\p\0\l\o\l\k\6\l\n\w\3\f\l\d\m\t\2\j\5\y\e\z\o\i\5\0\3\s\4\j\i\2\2\3\6\3\z\v\9\p\v\2\v\l\i\w\e\h\8\3\b\h\i\o\h\c\1\8\j\g\p\n\i\p\9\p\w\d\r\c\8\n\s\r\9\z\y\y\4\s\1\e\8\j\3\4\a\g\0\3\a\k\t\x\c\7\q\q\1\m\c\h\7\n\7\m\p\s\r\r\0\4\h\l\g\s\p\z\a\a\j\n\p\6\h\n\u\4\9\n\z\4\g\x\2\m\d\e\9\6\j\s\b\u\e\j\g\b\t\p\n\s\h\g\u\d\w\c\m\2\p\g\0\w\w\p\p\d\u\o\x\q\v\k\3\w\j\c\f\u\8\k\o\n\f\r\i\x\4\a\h\y\2\o\d\k\r\6\5\f\t\1\h\9\d\8\z\w\y\7\o\y\x\e\a\v\x\4\1\t\8\g\w\l\o\r\0\n\1\r\f\r\4\g\m\l\a\1\2\6\p\i\q\1\c\9\7\7\7\x\b\0\x\d\z\2\i\d\f\l\h\f\i\3\f\0\b\1\w\f\3\j\8\n\0\4\j\3\n\j\k\u\5\r\g\v\w\b\e\f\s\c\q\v\v\x\m\c\7\k\x\p\j\4\c\1\e\t\4\j\g\b\j\k\z\u\f\q\j\d\q\u\8\1\o\0\e\3\z\q\s\i\0\3\f\n\l\h\0\p\t\h\n\z\1\n\t\w\b\i\i\9\m\p\g\r\d\z\o\2\m\w\c\4\q\z\k\4\h\i\p\j\o\2\q\0\u\b\j\b\t\u\5\x\f\l\m\h\i\m\3\7\k\f\s\g\y\k\0\v\k\z\j\9\s\i\4\5\e\d\3\5\r\v\m\i\i\4\h\5\9\q\6\y\g\p\p\c\v\8\u\w\3\n\d\x\k\d\v\6\n\w\9\g\v\u\p\8\5\i\a\f\j\z\h\5\g\t\c\s\2\7\i\e\u\h\w\r\0\z\i\2\s\a\6\5\z\q\l\q\x\g\5\u\r\o\4\y\f\k\h\s\c\e\1\1\j\n\2\o\x\t\8\u\7\0\5\d\0\u\w\l\r\4\t\7\0\e\l\c\s\0\6\1\a\u\4\j\r\n\t\4\c\b\r\g\q\1\k\c\o\h\8\s\2\n\x\h\q\r\w\b\p\b\t\c\l\5\h\4\4\x\k\l\2\w\y\y\3\1\o\t\i\0\i\o\1\x\4\3\k\t\b\2\x\d\o\r\w\2\i\9\z\v\d\t\w\3\0\u\3\k\7\2\w\4\2\f\n\8\x\r\a\b\d\1\2\f\k\m\6\5\2\1\b\v\p\s\2\1\4\b\s\2\e\u\5\h\l\i\p\b\s\t\c\t\e\0\p\l\r\k\z\8\0\6\v\t\w\q\e\s\6\a\5\n\a\s\6\c\w\t\5\v\3\l\x\7\w\w\d\w\8\j\m\u\k\u\y\f\z\i\1\k\a\j\g\e\7\7\o\9\l\c\w\b\q\e\g\f\z\4\j\u\u\h\j\x\c\f\t\q\k\u\u\6\v\z\1\5\4\i\w\0\x\r\u\a\2\3\q\9\4\t\5\u\i\1\4\w\j\8\2\m\3\b\g\1\f\9\f\p\r\6\n\3\j\2\e\1\i\j\9\r\z\7\c\5\m\b\8\t\b\9\2\c\i\e\z\2\c\t\u\c\q\2\k\w\o\b\e\o\c\w\j\5\g\f\v\p\k\8\v\p\w\g\i\7\n\1\z\8\3\s\z\c\w\w\4\m\s\x\d\i\o\m\s\g\3\1\z\e\9\a\g\p\x\z\3\2\b\t\f\p\r\m\j\k\f\q\6\y\k\p\w\4\n\7\g\4\4\w\t\y\e\1\r\4\z\c\g\q\0\o\z\1\u\9\4\2\3\y\e\m\0\k\9\6\v\o\5\s\m\o\y\4\e\w\x\n\5\k\o\w\g\o\e\s\k\i\6\8\4\s\e\y\7\s\y\y\6\s\e\7\9\h\y\y\d\9\5\2\2\3\9\3\k\i\3\k\f\0\9\2\2\5\v\h\9\j\f\w\q\t\c\m\w\h\i\7\z\0\g\y\m\h\q\t\n\k\x\a\b\d\6\9\y\3\e\7\t\w\5\a\c\e\n\m\y\k\7\q\m\l\q\q\5\m\m\a\x\p\9\9\n\d\s\w\m\4\8\6\1\b\g\i\7\2\x\c\o\0\8\q\f\6\5\q\4\r\9\z\q\l\o\m\j\1\2\7\w\z\7\0\z\4\9\q\x\u\1\d\r\y\s\y\a\x\h\0\8\8\c\h\v\k\a\t\e\x\4\4\e\l\m\e\o\b\s\c\k\c\b\2\7\7\l\y\a\u\s\m\7\9\s\m\h\5\x\c\e\4\2\6\h\a\o\2\g\5\s\t\t\4\r\l\c\l\f\6\d\n\7\7\1\8\y\3\k\4\2\y\5\2\7\8 ]] 00:08:33.846 00:08:33.846 real 0m1.144s 00:08:33.846 user 0m0.747s 00:08:33.846 sys 0m0.251s 00:08:33.846 14:59:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:33.846 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.846 ************************************ 00:08:33.846 END TEST dd_rw_offset 00:08:33.846 ************************************ 00:08:33.847 14:59:04 -- dd/basic_rw.sh@1 -- # cleanup 00:08:33.847 14:59:04 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:33.847 14:59:04 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:33.847 14:59:04 -- dd/common.sh@11 -- # local nvme_ref= 00:08:33.847 14:59:04 -- dd/common.sh@12 -- # local size=0xffff 00:08:33.847 14:59:04 -- dd/common.sh@14 -- 
# local bs=1048576 00:08:33.847 14:59:04 -- dd/common.sh@15 -- # local count=1 00:08:33.847 14:59:04 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:33.847 14:59:04 -- dd/common.sh@18 -- # gen_conf 00:08:33.847 14:59:04 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.847 14:59:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.847 [2024-11-20 14:59:04.570108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:33.847 [2024-11-20 14:59:04.570263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69914 ] 00:08:33.847 { 00:08:33.847 "subsystems": [ 00:08:33.847 { 00:08:33.847 "subsystem": "bdev", 00:08:33.847 "config": [ 00:08:33.847 { 00:08:33.847 "params": { 00:08:33.847 "trtype": "pcie", 00:08:33.847 "traddr": "0000:00:06.0", 00:08:33.847 "name": "Nvme0" 00:08:33.847 }, 00:08:33.847 "method": "bdev_nvme_attach_controller" 00:08:33.847 }, 00:08:33.847 { 00:08:33.847 "method": "bdev_wait_for_examine" 00:08:33.847 } 00:08:33.847 ] 00:08:33.847 } 00:08:33.847 ] 00:08:33.847 } 00:08:34.105 [2024-11-20 14:59:04.709747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.105 [2024-11-20 14:59:04.752822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.105  [2024-11-20T14:59:05.167Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:34.363 00:08:34.363 14:59:05 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.363 00:08:34.363 real 0m16.383s 00:08:34.363 user 0m11.720s 00:08:34.363 sys 0m3.172s 00:08:34.363 14:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:34.363 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.363 ************************************ 00:08:34.363 END TEST spdk_dd_basic_rw 00:08:34.363 ************************************ 00:08:34.363 14:59:05 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:34.363 14:59:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.364 14:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.364 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.364 ************************************ 00:08:34.364 START TEST spdk_dd_posix 00:08:34.364 ************************************ 00:08:34.364 14:59:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:34.364 * Looking for test storage... 
00:08:34.622 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:34.622 14:59:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:34.622 14:59:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:34.622 14:59:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:34.622 14:59:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:34.622 14:59:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:34.622 14:59:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:34.622 14:59:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:34.622 14:59:05 -- scripts/common.sh@335 -- # IFS=.-: 00:08:34.622 14:59:05 -- scripts/common.sh@335 -- # read -ra ver1 00:08:34.622 14:59:05 -- scripts/common.sh@336 -- # IFS=.-: 00:08:34.622 14:59:05 -- scripts/common.sh@336 -- # read -ra ver2 00:08:34.622 14:59:05 -- scripts/common.sh@337 -- # local 'op=<' 00:08:34.622 14:59:05 -- scripts/common.sh@339 -- # ver1_l=2 00:08:34.622 14:59:05 -- scripts/common.sh@340 -- # ver2_l=1 00:08:34.622 14:59:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:34.622 14:59:05 -- scripts/common.sh@343 -- # case "$op" in 00:08:34.622 14:59:05 -- scripts/common.sh@344 -- # : 1 00:08:34.622 14:59:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:34.622 14:59:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:34.622 14:59:05 -- scripts/common.sh@364 -- # decimal 1 00:08:34.622 14:59:05 -- scripts/common.sh@352 -- # local d=1 00:08:34.622 14:59:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:34.622 14:59:05 -- scripts/common.sh@354 -- # echo 1 00:08:34.622 14:59:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:34.622 14:59:05 -- scripts/common.sh@365 -- # decimal 2 00:08:34.622 14:59:05 -- scripts/common.sh@352 -- # local d=2 00:08:34.622 14:59:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:34.622 14:59:05 -- scripts/common.sh@354 -- # echo 2 00:08:34.622 14:59:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:34.622 14:59:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:34.622 14:59:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:34.622 14:59:05 -- scripts/common.sh@367 -- # return 0 00:08:34.622 14:59:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:34.622 14:59:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.622 --rc genhtml_branch_coverage=1 00:08:34.622 --rc genhtml_function_coverage=1 00:08:34.622 --rc genhtml_legend=1 00:08:34.622 --rc geninfo_all_blocks=1 00:08:34.622 --rc geninfo_unexecuted_blocks=1 00:08:34.622 00:08:34.622 ' 00:08:34.622 14:59:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.622 --rc genhtml_branch_coverage=1 00:08:34.622 --rc genhtml_function_coverage=1 00:08:34.622 --rc genhtml_legend=1 00:08:34.622 --rc geninfo_all_blocks=1 00:08:34.622 --rc geninfo_unexecuted_blocks=1 00:08:34.622 00:08:34.622 ' 00:08:34.622 14:59:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:34.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.623 --rc genhtml_branch_coverage=1 00:08:34.623 --rc genhtml_function_coverage=1 00:08:34.623 --rc genhtml_legend=1 00:08:34.623 --rc geninfo_all_blocks=1 00:08:34.623 --rc geninfo_unexecuted_blocks=1 00:08:34.623 00:08:34.623 ' 00:08:34.623 14:59:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:34.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:34.623 --rc genhtml_branch_coverage=1 00:08:34.623 --rc genhtml_function_coverage=1 00:08:34.623 --rc genhtml_legend=1 00:08:34.623 --rc geninfo_all_blocks=1 00:08:34.623 --rc geninfo_unexecuted_blocks=1 00:08:34.623 00:08:34.623 ' 00:08:34.623 14:59:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:34.623 14:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.623 14:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.623 14:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.623 14:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.623 14:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.623 14:59:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.623 14:59:05 -- paths/export.sh@5 -- # export PATH 00:08:34.623 14:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.623 14:59:05 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:34.623 14:59:05 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:34.623 14:59:05 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:34.623 14:59:05 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:34.623 14:59:05 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:34.623 14:59:05 -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.623 14:59:05 -- dd/posix.sh@130 -- # tests 00:08:34.623 14:59:05 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:34.623 * First test run, liburing in use 00:08:34.623 14:59:05 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:34.623 14:59:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.623 14:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.623 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 ************************************ 00:08:34.623 START TEST dd_flag_append 00:08:34.623 ************************************ 00:08:34.623 14:59:05 -- common/autotest_common.sh@1114 -- # append 00:08:34.623 14:59:05 -- dd/posix.sh@16 -- # local dump0 00:08:34.623 14:59:05 -- dd/posix.sh@17 -- # local dump1 00:08:34.623 14:59:05 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:34.623 14:59:05 -- dd/common.sh@98 -- # xtrace_disable 00:08:34.623 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 14:59:05 -- dd/posix.sh@19 -- # dump0=qe9h9wm7jub2ln3c44a9otwwfmn6hjs6 00:08:34.623 14:59:05 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:34.623 14:59:05 -- dd/common.sh@98 -- # xtrace_disable 00:08:34.623 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.623 14:59:05 -- dd/posix.sh@20 -- # dump1=7jdv2nm2qo4zuv71xu51kvr8ictf7xqz 00:08:34.623 14:59:05 -- dd/posix.sh@22 -- # printf %s qe9h9wm7jub2ln3c44a9otwwfmn6hjs6 00:08:34.623 14:59:05 -- dd/posix.sh@23 -- # printf %s 7jdv2nm2qo4zuv71xu51kvr8ictf7xqz 00:08:34.623 14:59:05 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:34.623 [2024-11-20 14:59:05.380957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:34.623 [2024-11-20 14:59:05.381918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69978 ] 00:08:34.881 [2024-11-20 14:59:05.522597] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.881 [2024-11-20 14:59:05.559211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.881  [2024-11-20T14:59:05.943Z] Copying: 32/32 [B] (average 31 kBps) 00:08:35.139 00:08:35.139 14:59:05 -- dd/posix.sh@27 -- # [[ 7jdv2nm2qo4zuv71xu51kvr8ictf7xqzqe9h9wm7jub2ln3c44a9otwwfmn6hjs6 == \7\j\d\v\2\n\m\2\q\o\4\z\u\v\7\1\x\u\5\1\k\v\r\8\i\c\t\f\7\x\q\z\q\e\9\h\9\w\m\7\j\u\b\2\l\n\3\c\4\4\a\9\o\t\w\w\f\m\n\6\h\j\s\6 ]] 00:08:35.139 00:08:35.139 real 0m0.463s 00:08:35.139 user 0m0.238s 00:08:35.139 sys 0m0.101s 00:08:35.139 14:59:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.139 ************************************ 00:08:35.139 END TEST dd_flag_append 00:08:35.139 ************************************ 00:08:35.139 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.139 14:59:05 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:35.139 14:59:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.139 14:59:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.139 14:59:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.139 ************************************ 00:08:35.139 START TEST dd_flag_directory 00:08:35.139 ************************************ 00:08:35.139 14:59:05 -- common/autotest_common.sh@1114 -- # directory 00:08:35.139 14:59:05 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.139 14:59:05 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.139 14:59:05 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.139 14:59:05 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.139 14:59:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.139 14:59:05 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.139 14:59:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.139 14:59:05 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.139 14:59:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.139 14:59:05 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.139 14:59:05 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.139 14:59:05 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:35.139 [2024-11-20 14:59:05.882787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:35.139 [2024-11-20 14:59:05.882952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70005 ] 00:08:35.398 [2024-11-20 14:59:06.022547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.398 [2024-11-20 14:59:06.065099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.398 [2024-11-20 14:59:06.111206] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:35.398 [2024-11-20 14:59:06.111263] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:35.398 [2024-11-20 14:59:06.111277] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.398 [2024-11-20 14:59:06.175047] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:35.657 14:59:06 -- common/autotest_common.sh@653 -- # es=236 00:08:35.657 14:59:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.657 14:59:06 -- common/autotest_common.sh@662 -- # es=108 00:08:35.657 14:59:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.657 14:59:06 -- common/autotest_common.sh@670 -- # es=1 00:08:35.657 14:59:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.657 14:59:06 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:35.657 14:59:06 -- common/autotest_common.sh@650 -- # local es=0 00:08:35.657 14:59:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:35.657 14:59:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.657 14:59:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.657 14:59:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.657 14:59:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.657 14:59:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.657 14:59:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.657 14:59:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.657 14:59:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.657 14:59:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:35.657 [2024-11-20 14:59:06.309526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:35.657 [2024-11-20 14:59:06.309693] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70009 ] 00:08:35.657 [2024-11-20 14:59:06.447214] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.916 [2024-11-20 14:59:06.487876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.916 [2024-11-20 14:59:06.540586] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:35.916 [2024-11-20 14:59:06.540692] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:35.916 [2024-11-20 14:59:06.540715] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.916 [2024-11-20 14:59:06.612165] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:35.916 14:59:06 -- common/autotest_common.sh@653 -- # es=236 00:08:35.916 14:59:06 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.916 14:59:06 -- common/autotest_common.sh@662 -- # es=108 00:08:35.916 14:59:06 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.916 14:59:06 -- common/autotest_common.sh@670 -- # es=1 00:08:35.916 14:59:06 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.916 00:08:35.916 real 0m0.866s 00:08:35.916 user 0m0.431s 00:08:35.916 sys 0m0.223s 00:08:35.916 14:59:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:35.916 14:59:06 -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 ************************************ 00:08:35.916 END TEST dd_flag_directory 00:08:35.916 ************************************ 00:08:35.916 14:59:06 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:35.916 14:59:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.916 14:59:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.175 14:59:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.175 ************************************ 00:08:36.175 START TEST dd_flag_nofollow 00:08:36.175 ************************************ 00:08:36.175 14:59:06 -- common/autotest_common.sh@1114 -- # nofollow 00:08:36.175 14:59:06 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:36.175 14:59:06 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:36.175 14:59:06 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:36.175 14:59:06 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:36.175 14:59:06 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.175 14:59:06 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.175 14:59:06 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.175 14:59:06 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.175 14:59:06 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.175 14:59:06 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.175 14:59:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.175 14:59:06 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.175 14:59:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.175 14:59:06 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.175 14:59:06 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.175 14:59:06 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.175 [2024-11-20 14:59:06.777990] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:36.175 [2024-11-20 14:59:06.778099] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70043 ] 00:08:36.175 [2024-11-20 14:59:06.908406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.175 [2024-11-20 14:59:06.950518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.433 [2024-11-20 14:59:07.005298] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:36.433 [2024-11-20 14:59:07.005395] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:36.433 [2024-11-20 14:59:07.005420] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.433 [2024-11-20 14:59:07.076006] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:36.433 14:59:07 -- common/autotest_common.sh@653 -- # es=216 00:08:36.433 14:59:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.433 14:59:07 -- common/autotest_common.sh@662 -- # es=88 00:08:36.433 14:59:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.433 14:59:07 -- common/autotest_common.sh@670 -- # es=1 00:08:36.433 14:59:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.433 14:59:07 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:36.433 14:59:07 -- common/autotest_common.sh@650 -- # local es=0 00:08:36.433 14:59:07 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:36.433 14:59:07 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.433 14:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.434 14:59:07 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.434 14:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.434 14:59:07 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.434 14:59:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.434 14:59:07 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.434 14:59:07 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.434 14:59:07 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:36.434 [2024-11-20 14:59:07.206225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:36.434 [2024-11-20 14:59:07.206373] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70047 ] 00:08:36.693 [2024-11-20 14:59:07.344845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.693 [2024-11-20 14:59:07.387013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.693 [2024-11-20 14:59:07.440785] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:36.693 [2024-11-20 14:59:07.440873] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:36.693 [2024-11-20 14:59:07.440901] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.952 [2024-11-20 14:59:07.512857] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:36.952 14:59:07 -- common/autotest_common.sh@653 -- # es=216 00:08:36.952 14:59:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.952 14:59:07 -- common/autotest_common.sh@662 -- # es=88 00:08:36.952 14:59:07 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.952 14:59:07 -- common/autotest_common.sh@670 -- # es=1 00:08:36.952 14:59:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.952 14:59:07 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:36.952 14:59:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:36.952 14:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:36.952 14:59:07 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.952 [2024-11-20 14:59:07.636892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:36.952 [2024-11-20 14:59:07.636993] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70060 ] 00:08:37.210 [2024-11-20 14:59:07.768423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.210 [2024-11-20 14:59:07.804594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.210  [2024-11-20T14:59:08.014Z] Copying: 512/512 [B] (average 500 kBps) 00:08:37.210 00:08:37.210 14:59:08 -- dd/posix.sh@49 -- # [[ gfmb9nadrym6vxfyo6dk4htdmvabf5hcmrctx3h8cbjvz53znzhc71kvw4wntlq3zq0fcbmn0oksu1m4956qkxec8edw99xx2yb7n2iyg9soipzngpzokos11vqw4r0xic384p7yq3554y0e0pj9kd6qohlo63ei1amtphlajogo5zht3mav9php0g2mlomqosuqsdxz64j6n8camk5towgqgfk11u1qtodco4jsqr7bbzb3k3vup649d24829qy3cchrot8jki12bcp7pew3jvu58ft4jyt9hzfwfzu21ojgw1t6dtvbiz9xhah9ai8oqybemqlhd7qdvgdujjz9vg8i7nequ2slzxwian4hxpoairq5g7q0k1wqm8v6gqhinvqx2y6myt29xie7lxspldx3lp3kbcbar0k8gd3qh42gu2fz3b8pi64yodcy04al6m2nhczw9zbv1pqou53aiq2ia6ks2w1h94y8hz6yyuxbcd1m182y5arb4jmapit == \g\f\m\b\9\n\a\d\r\y\m\6\v\x\f\y\o\6\d\k\4\h\t\d\m\v\a\b\f\5\h\c\m\r\c\t\x\3\h\8\c\b\j\v\z\5\3\z\n\z\h\c\7\1\k\v\w\4\w\n\t\l\q\3\z\q\0\f\c\b\m\n\0\o\k\s\u\1\m\4\9\5\6\q\k\x\e\c\8\e\d\w\9\9\x\x\2\y\b\7\n\2\i\y\g\9\s\o\i\p\z\n\g\p\z\o\k\o\s\1\1\v\q\w\4\r\0\x\i\c\3\8\4\p\7\y\q\3\5\5\4\y\0\e\0\p\j\9\k\d\6\q\o\h\l\o\6\3\e\i\1\a\m\t\p\h\l\a\j\o\g\o\5\z\h\t\3\m\a\v\9\p\h\p\0\g\2\m\l\o\m\q\o\s\u\q\s\d\x\z\6\4\j\6\n\8\c\a\m\k\5\t\o\w\g\q\g\f\k\1\1\u\1\q\t\o\d\c\o\4\j\s\q\r\7\b\b\z\b\3\k\3\v\u\p\6\4\9\d\2\4\8\2\9\q\y\3\c\c\h\r\o\t\8\j\k\i\1\2\b\c\p\7\p\e\w\3\j\v\u\5\8\f\t\4\j\y\t\9\h\z\f\w\f\z\u\2\1\o\j\g\w\1\t\6\d\t\v\b\i\z\9\x\h\a\h\9\a\i\8\o\q\y\b\e\m\q\l\h\d\7\q\d\v\g\d\u\j\j\z\9\v\g\8\i\7\n\e\q\u\2\s\l\z\x\w\i\a\n\4\h\x\p\o\a\i\r\q\5\g\7\q\0\k\1\w\q\m\8\v\6\g\q\h\i\n\v\q\x\2\y\6\m\y\t\2\9\x\i\e\7\l\x\s\p\l\d\x\3\l\p\3\k\b\c\b\a\r\0\k\8\g\d\3\q\h\4\2\g\u\2\f\z\3\b\8\p\i\6\4\y\o\d\c\y\0\4\a\l\6\m\2\n\h\c\z\w\9\z\b\v\1\p\q\o\u\5\3\a\i\q\2\i\a\6\k\s\2\w\1\h\9\4\y\8\h\z\6\y\y\u\x\b\c\d\1\m\1\8\2\y\5\a\r\b\4\j\m\a\p\i\t ]] 00:08:37.210 00:08:37.210 real 0m1.275s 00:08:37.210 user 0m0.631s 00:08:37.210 sys 0m0.312s 00:08:37.210 14:59:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:37.210 14:59:08 -- common/autotest_common.sh@10 -- # set +x 00:08:37.210 ************************************ 00:08:37.210 END TEST dd_flag_nofollow 00:08:37.210 ************************************ 00:08:37.510 14:59:08 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:37.510 14:59:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:37.510 14:59:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.510 14:59:08 -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 ************************************ 00:08:37.510 START TEST dd_flag_noatime 00:08:37.510 ************************************ 00:08:37.510 14:59:08 -- common/autotest_common.sh@1114 -- # noatime 00:08:37.510 14:59:08 -- dd/posix.sh@53 -- # local atime_if 00:08:37.510 14:59:08 -- dd/posix.sh@54 -- # local atime_of 00:08:37.510 14:59:08 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:37.510 14:59:08 -- dd/common.sh@98 -- # xtrace_disable 00:08:37.510 14:59:08 -- common/autotest_common.sh@10 -- # set +x 00:08:37.510 14:59:08 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:37.510 14:59:08 -- dd/posix.sh@60 -- # atime_if=1732114747 
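The dd_flag_noatime block that begins above records the source file's access time with stat --printf=%X, copies it with the noatime input flag, and later re-reads the timestamp to confirm the read left it unchanged. A minimal bash sketch of the same check, using plain GNU dd and placeholder file names (src.bin/dst.bin) rather than the spdk_dd binary and dd.dump files exercised in this run:

    # Sketch only: illustrates the atime comparison, not the posix.sh implementation.
    src=src.bin; dst=dst.bin
    head -c 512 /dev/urandom > "$src"
    atime_before=$(stat --printf=%X "$src")
    sleep 1
    # iflag=noatime asks the kernel not to update the access time on read;
    # this assumes the filesystem honors O_NOATIME for the file's owner.
    dd if="$src" of="$dst" iflag=noatime status=none
    atime_after=$(stat --printf=%X "$src")
    (( atime_before == atime_after )) && echo "atime preserved" || echo "atime changed"

The real test additionally performs a second copy without noatime and expects the recorded atime to move forward, which is the (( atime_if < ... )) comparison visible further on in the log.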
00:08:37.510 14:59:08 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.510 14:59:08 -- dd/posix.sh@61 -- # atime_of=1732114748 00:08:37.510 14:59:08 -- dd/posix.sh@66 -- # sleep 1 00:08:38.445 14:59:09 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.445 [2024-11-20 14:59:09.121851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:38.445 [2024-11-20 14:59:09.122000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:08:38.703 [2024-11-20 14:59:09.261023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.703 [2024-11-20 14:59:09.303479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.703  [2024-11-20T14:59:09.765Z] Copying: 512/512 [B] (average 500 kBps) 00:08:38.961 00:08:38.961 14:59:09 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.961 14:59:09 -- dd/posix.sh@69 -- # (( atime_if == 1732114747 )) 00:08:38.961 14:59:09 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.961 14:59:09 -- dd/posix.sh@70 -- # (( atime_of == 1732114748 )) 00:08:38.961 14:59:09 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.961 [2024-11-20 14:59:09.601119] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:38.961 [2024-11-20 14:59:09.601262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70112 ] 00:08:38.961 [2024-11-20 14:59:09.739834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.219 [2024-11-20 14:59:09.783138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.219  [2024-11-20T14:59:10.023Z] Copying: 512/512 [B] (average 500 kBps) 00:08:39.219 00:08:39.219 14:59:10 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.219 14:59:10 -- dd/posix.sh@73 -- # (( atime_if < 1732114749 )) 00:08:39.219 00:08:39.219 real 0m1.975s 00:08:39.219 user 0m0.487s 00:08:39.219 sys 0m0.235s 00:08:39.219 14:59:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:39.219 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.219 ************************************ 00:08:39.219 END TEST dd_flag_noatime 00:08:39.219 ************************************ 00:08:39.477 14:59:10 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:39.477 14:59:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:39.477 14:59:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.477 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.477 ************************************ 00:08:39.477 START TEST dd_flags_misc 00:08:39.477 ************************************ 00:08:39.477 14:59:10 -- common/autotest_common.sh@1114 -- # io 00:08:39.477 14:59:10 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:39.477 14:59:10 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:39.477 14:59:10 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:39.477 14:59:10 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:39.477 14:59:10 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:39.477 14:59:10 -- dd/common.sh@98 -- # xtrace_disable 00:08:39.477 14:59:10 -- common/autotest_common.sh@10 -- # set +x 00:08:39.477 14:59:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.477 14:59:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:39.477 [2024-11-20 14:59:10.114216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:39.477 [2024-11-20 14:59:10.114313] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:08:39.477 [2024-11-20 14:59:10.245366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.736 [2024-11-20 14:59:10.287113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.736  [2024-11-20T14:59:10.540Z] Copying: 512/512 [B] (average 500 kBps) 00:08:39.736 00:08:39.736 14:59:10 -- dd/posix.sh@93 -- # [[ fgnwxi8bb26y3wocmw09k9yw5w0q8k7my0lok2pogr055tmdleak93pa0b1dmweqcud342n14g1jd18kkbyi6bvu3nxjfrpp14hf9nxocnkz3enc0r83smornyey6uhohcirclkyppgw34fnla3v8nqgn97cnbfci6gybyhxxct37ngo9aj4io70213kdzya0mrx2h38m1xbd0lizpey5gc4xlikghqnlpwvrh8yj10uiqpyxo2slkg0my5st148h76y7ag5i6yw2347ux3ybnngkmec915qfst7om6x7v3mx8883h8ffi2mljli64oc9l7vs3jwj0mw22hh3rit46n4yiook6uo9lwwds3sk34bieqyq0gqkw3ohnvn5t8r4hv8nn3u7c7stqodis6x1s1mge9f72u5v4db8jzh1rba49wc8euh4ljbvsdwqacvy390vg4gsabb805ec9a91i275igxqvh5xkxroml7oc2bwifr47twn9uppjomcvyf == \f\g\n\w\x\i\8\b\b\2\6\y\3\w\o\c\m\w\0\9\k\9\y\w\5\w\0\q\8\k\7\m\y\0\l\o\k\2\p\o\g\r\0\5\5\t\m\d\l\e\a\k\9\3\p\a\0\b\1\d\m\w\e\q\c\u\d\3\4\2\n\1\4\g\1\j\d\1\8\k\k\b\y\i\6\b\v\u\3\n\x\j\f\r\p\p\1\4\h\f\9\n\x\o\c\n\k\z\3\e\n\c\0\r\8\3\s\m\o\r\n\y\e\y\6\u\h\o\h\c\i\r\c\l\k\y\p\p\g\w\3\4\f\n\l\a\3\v\8\n\q\g\n\9\7\c\n\b\f\c\i\6\g\y\b\y\h\x\x\c\t\3\7\n\g\o\9\a\j\4\i\o\7\0\2\1\3\k\d\z\y\a\0\m\r\x\2\h\3\8\m\1\x\b\d\0\l\i\z\p\e\y\5\g\c\4\x\l\i\k\g\h\q\n\l\p\w\v\r\h\8\y\j\1\0\u\i\q\p\y\x\o\2\s\l\k\g\0\m\y\5\s\t\1\4\8\h\7\6\y\7\a\g\5\i\6\y\w\2\3\4\7\u\x\3\y\b\n\n\g\k\m\e\c\9\1\5\q\f\s\t\7\o\m\6\x\7\v\3\m\x\8\8\8\3\h\8\f\f\i\2\m\l\j\l\i\6\4\o\c\9\l\7\v\s\3\j\w\j\0\m\w\2\2\h\h\3\r\i\t\4\6\n\4\y\i\o\o\k\6\u\o\9\l\w\w\d\s\3\s\k\3\4\b\i\e\q\y\q\0\g\q\k\w\3\o\h\n\v\n\5\t\8\r\4\h\v\8\n\n\3\u\7\c\7\s\t\q\o\d\i\s\6\x\1\s\1\m\g\e\9\f\7\2\u\5\v\4\d\b\8\j\z\h\1\r\b\a\4\9\w\c\8\e\u\h\4\l\j\b\v\s\d\w\q\a\c\v\y\3\9\0\v\g\4\g\s\a\b\b\8\0\5\e\c\9\a\9\1\i\2\7\5\i\g\x\q\v\h\5\x\k\x\r\o\m\l\7\o\c\2\b\w\i\f\r\4\7\t\w\n\9\u\p\p\j\o\m\c\v\y\f ]] 00:08:39.736 14:59:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:39.736 14:59:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:39.994 [2024-11-20 14:59:10.556203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:39.994 [2024-11-20 14:59:10.556365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70141 ] 00:08:39.994 [2024-11-20 14:59:10.698161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.994 [2024-11-20 14:59:10.733801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.994  [2024-11-20T14:59:11.057Z] Copying: 512/512 [B] (average 500 kBps) 00:08:40.253 00:08:40.253 14:59:10 -- dd/posix.sh@93 -- # [[ fgnwxi8bb26y3wocmw09k9yw5w0q8k7my0lok2pogr055tmdleak93pa0b1dmweqcud342n14g1jd18kkbyi6bvu3nxjfrpp14hf9nxocnkz3enc0r83smornyey6uhohcirclkyppgw34fnla3v8nqgn97cnbfci6gybyhxxct37ngo9aj4io70213kdzya0mrx2h38m1xbd0lizpey5gc4xlikghqnlpwvrh8yj10uiqpyxo2slkg0my5st148h76y7ag5i6yw2347ux3ybnngkmec915qfst7om6x7v3mx8883h8ffi2mljli64oc9l7vs3jwj0mw22hh3rit46n4yiook6uo9lwwds3sk34bieqyq0gqkw3ohnvn5t8r4hv8nn3u7c7stqodis6x1s1mge9f72u5v4db8jzh1rba49wc8euh4ljbvsdwqacvy390vg4gsabb805ec9a91i275igxqvh5xkxroml7oc2bwifr47twn9uppjomcvyf == \f\g\n\w\x\i\8\b\b\2\6\y\3\w\o\c\m\w\0\9\k\9\y\w\5\w\0\q\8\k\7\m\y\0\l\o\k\2\p\o\g\r\0\5\5\t\m\d\l\e\a\k\9\3\p\a\0\b\1\d\m\w\e\q\c\u\d\3\4\2\n\1\4\g\1\j\d\1\8\k\k\b\y\i\6\b\v\u\3\n\x\j\f\r\p\p\1\4\h\f\9\n\x\o\c\n\k\z\3\e\n\c\0\r\8\3\s\m\o\r\n\y\e\y\6\u\h\o\h\c\i\r\c\l\k\y\p\p\g\w\3\4\f\n\l\a\3\v\8\n\q\g\n\9\7\c\n\b\f\c\i\6\g\y\b\y\h\x\x\c\t\3\7\n\g\o\9\a\j\4\i\o\7\0\2\1\3\k\d\z\y\a\0\m\r\x\2\h\3\8\m\1\x\b\d\0\l\i\z\p\e\y\5\g\c\4\x\l\i\k\g\h\q\n\l\p\w\v\r\h\8\y\j\1\0\u\i\q\p\y\x\o\2\s\l\k\g\0\m\y\5\s\t\1\4\8\h\7\6\y\7\a\g\5\i\6\y\w\2\3\4\7\u\x\3\y\b\n\n\g\k\m\e\c\9\1\5\q\f\s\t\7\o\m\6\x\7\v\3\m\x\8\8\8\3\h\8\f\f\i\2\m\l\j\l\i\6\4\o\c\9\l\7\v\s\3\j\w\j\0\m\w\2\2\h\h\3\r\i\t\4\6\n\4\y\i\o\o\k\6\u\o\9\l\w\w\d\s\3\s\k\3\4\b\i\e\q\y\q\0\g\q\k\w\3\o\h\n\v\n\5\t\8\r\4\h\v\8\n\n\3\u\7\c\7\s\t\q\o\d\i\s\6\x\1\s\1\m\g\e\9\f\7\2\u\5\v\4\d\b\8\j\z\h\1\r\b\a\4\9\w\c\8\e\u\h\4\l\j\b\v\s\d\w\q\a\c\v\y\3\9\0\v\g\4\g\s\a\b\b\8\0\5\e\c\9\a\9\1\i\2\7\5\i\g\x\q\v\h\5\x\k\x\r\o\m\l\7\o\c\2\b\w\i\f\r\4\7\t\w\n\9\u\p\p\j\o\m\c\v\y\f ]] 00:08:40.253 14:59:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.253 14:59:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:40.253 [2024-11-20 14:59:10.986150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:40.253 [2024-11-20 14:59:10.986296] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70148 ] 00:08:40.511 [2024-11-20 14:59:11.126119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.511 [2024-11-20 14:59:11.167707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.511  [2024-11-20T14:59:11.574Z] Copying: 512/512 [B] (average 166 kBps) 00:08:40.770 00:08:40.770 14:59:11 -- dd/posix.sh@93 -- # [[ fgnwxi8bb26y3wocmw09k9yw5w0q8k7my0lok2pogr055tmdleak93pa0b1dmweqcud342n14g1jd18kkbyi6bvu3nxjfrpp14hf9nxocnkz3enc0r83smornyey6uhohcirclkyppgw34fnla3v8nqgn97cnbfci6gybyhxxct37ngo9aj4io70213kdzya0mrx2h38m1xbd0lizpey5gc4xlikghqnlpwvrh8yj10uiqpyxo2slkg0my5st148h76y7ag5i6yw2347ux3ybnngkmec915qfst7om6x7v3mx8883h8ffi2mljli64oc9l7vs3jwj0mw22hh3rit46n4yiook6uo9lwwds3sk34bieqyq0gqkw3ohnvn5t8r4hv8nn3u7c7stqodis6x1s1mge9f72u5v4db8jzh1rba49wc8euh4ljbvsdwqacvy390vg4gsabb805ec9a91i275igxqvh5xkxroml7oc2bwifr47twn9uppjomcvyf == \f\g\n\w\x\i\8\b\b\2\6\y\3\w\o\c\m\w\0\9\k\9\y\w\5\w\0\q\8\k\7\m\y\0\l\o\k\2\p\o\g\r\0\5\5\t\m\d\l\e\a\k\9\3\p\a\0\b\1\d\m\w\e\q\c\u\d\3\4\2\n\1\4\g\1\j\d\1\8\k\k\b\y\i\6\b\v\u\3\n\x\j\f\r\p\p\1\4\h\f\9\n\x\o\c\n\k\z\3\e\n\c\0\r\8\3\s\m\o\r\n\y\e\y\6\u\h\o\h\c\i\r\c\l\k\y\p\p\g\w\3\4\f\n\l\a\3\v\8\n\q\g\n\9\7\c\n\b\f\c\i\6\g\y\b\y\h\x\x\c\t\3\7\n\g\o\9\a\j\4\i\o\7\0\2\1\3\k\d\z\y\a\0\m\r\x\2\h\3\8\m\1\x\b\d\0\l\i\z\p\e\y\5\g\c\4\x\l\i\k\g\h\q\n\l\p\w\v\r\h\8\y\j\1\0\u\i\q\p\y\x\o\2\s\l\k\g\0\m\y\5\s\t\1\4\8\h\7\6\y\7\a\g\5\i\6\y\w\2\3\4\7\u\x\3\y\b\n\n\g\k\m\e\c\9\1\5\q\f\s\t\7\o\m\6\x\7\v\3\m\x\8\8\8\3\h\8\f\f\i\2\m\l\j\l\i\6\4\o\c\9\l\7\v\s\3\j\w\j\0\m\w\2\2\h\h\3\r\i\t\4\6\n\4\y\i\o\o\k\6\u\o\9\l\w\w\d\s\3\s\k\3\4\b\i\e\q\y\q\0\g\q\k\w\3\o\h\n\v\n\5\t\8\r\4\h\v\8\n\n\3\u\7\c\7\s\t\q\o\d\i\s\6\x\1\s\1\m\g\e\9\f\7\2\u\5\v\4\d\b\8\j\z\h\1\r\b\a\4\9\w\c\8\e\u\h\4\l\j\b\v\s\d\w\q\a\c\v\y\3\9\0\v\g\4\g\s\a\b\b\8\0\5\e\c\9\a\9\1\i\2\7\5\i\g\x\q\v\h\5\x\k\x\r\o\m\l\7\o\c\2\b\w\i\f\r\4\7\t\w\n\9\u\p\p\j\o\m\c\v\y\f ]] 00:08:40.770 14:59:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.770 14:59:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:40.770 [2024-11-20 14:59:11.427367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:40.770 [2024-11-20 14:59:11.427468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70150 ] 00:08:40.770 [2024-11-20 14:59:11.560412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.028 [2024-11-20 14:59:11.595576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.028  [2024-11-20T14:59:11.832Z] Copying: 512/512 [B] (average 500 kBps) 00:08:41.028 00:08:41.028 14:59:11 -- dd/posix.sh@93 -- # [[ fgnwxi8bb26y3wocmw09k9yw5w0q8k7my0lok2pogr055tmdleak93pa0b1dmweqcud342n14g1jd18kkbyi6bvu3nxjfrpp14hf9nxocnkz3enc0r83smornyey6uhohcirclkyppgw34fnla3v8nqgn97cnbfci6gybyhxxct37ngo9aj4io70213kdzya0mrx2h38m1xbd0lizpey5gc4xlikghqnlpwvrh8yj10uiqpyxo2slkg0my5st148h76y7ag5i6yw2347ux3ybnngkmec915qfst7om6x7v3mx8883h8ffi2mljli64oc9l7vs3jwj0mw22hh3rit46n4yiook6uo9lwwds3sk34bieqyq0gqkw3ohnvn5t8r4hv8nn3u7c7stqodis6x1s1mge9f72u5v4db8jzh1rba49wc8euh4ljbvsdwqacvy390vg4gsabb805ec9a91i275igxqvh5xkxroml7oc2bwifr47twn9uppjomcvyf == \f\g\n\w\x\i\8\b\b\2\6\y\3\w\o\c\m\w\0\9\k\9\y\w\5\w\0\q\8\k\7\m\y\0\l\o\k\2\p\o\g\r\0\5\5\t\m\d\l\e\a\k\9\3\p\a\0\b\1\d\m\w\e\q\c\u\d\3\4\2\n\1\4\g\1\j\d\1\8\k\k\b\y\i\6\b\v\u\3\n\x\j\f\r\p\p\1\4\h\f\9\n\x\o\c\n\k\z\3\e\n\c\0\r\8\3\s\m\o\r\n\y\e\y\6\u\h\o\h\c\i\r\c\l\k\y\p\p\g\w\3\4\f\n\l\a\3\v\8\n\q\g\n\9\7\c\n\b\f\c\i\6\g\y\b\y\h\x\x\c\t\3\7\n\g\o\9\a\j\4\i\o\7\0\2\1\3\k\d\z\y\a\0\m\r\x\2\h\3\8\m\1\x\b\d\0\l\i\z\p\e\y\5\g\c\4\x\l\i\k\g\h\q\n\l\p\w\v\r\h\8\y\j\1\0\u\i\q\p\y\x\o\2\s\l\k\g\0\m\y\5\s\t\1\4\8\h\7\6\y\7\a\g\5\i\6\y\w\2\3\4\7\u\x\3\y\b\n\n\g\k\m\e\c\9\1\5\q\f\s\t\7\o\m\6\x\7\v\3\m\x\8\8\8\3\h\8\f\f\i\2\m\l\j\l\i\6\4\o\c\9\l\7\v\s\3\j\w\j\0\m\w\2\2\h\h\3\r\i\t\4\6\n\4\y\i\o\o\k\6\u\o\9\l\w\w\d\s\3\s\k\3\4\b\i\e\q\y\q\0\g\q\k\w\3\o\h\n\v\n\5\t\8\r\4\h\v\8\n\n\3\u\7\c\7\s\t\q\o\d\i\s\6\x\1\s\1\m\g\e\9\f\7\2\u\5\v\4\d\b\8\j\z\h\1\r\b\a\4\9\w\c\8\e\u\h\4\l\j\b\v\s\d\w\q\a\c\v\y\3\9\0\v\g\4\g\s\a\b\b\8\0\5\e\c\9\a\9\1\i\2\7\5\i\g\x\q\v\h\5\x\k\x\r\o\m\l\7\o\c\2\b\w\i\f\r\4\7\t\w\n\9\u\p\p\j\o\m\c\v\y\f ]] 00:08:41.028 14:59:11 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:41.028 14:59:11 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:41.028 14:59:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:41.028 14:59:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.028 14:59:11 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.028 14:59:11 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:41.286 [2024-11-20 14:59:11.852189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:41.286 [2024-11-20 14:59:11.852289] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70163 ] 00:08:41.286 [2024-11-20 14:59:11.982476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.286 [2024-11-20 14:59:12.017755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.286  [2024-11-20T14:59:12.349Z] Copying: 512/512 [B] (average 500 kBps) 00:08:41.545 00:08:41.545 14:59:12 -- dd/posix.sh@93 -- # [[ 2tw3tuoeisfyltfbnk9q1dfze8kh2d15ofnju9vk1d9sfha34nwtdxdmnz101uuj09s47sr3p9eu9oa2or0rg8d5a8k6z3rewps1f9eptuvcv07vew0db7qfb9tgyxqz4z93y5ryp9vfbiif72f5gx38mfyzi1y8x83xipo3of42bwm6r17mqjarzmry9u85gw187v05pve2dx57jkxdovahrglnxm1c1v8jxrijd60cmlwsrpsdzfp4spc6m2m096mf92vdnyiyxyk3ibs3o18z1r4myguf5p1oly5ew5d1r51izayle2jm5lzxcntpe5wed0q8fq1yqpj41o1ptjydz4c9mqog9rq8ea2pvopjs4ddo5ifk3gk85ps9e8xagq2l5omjk9ku1namhxckkcs0axbaktrqsuz7xq67qsx7bzt2jua6dtsv5cp2dk7apa5cm1w4kl5p2gq9slm2e1fjnnugm57frpzvs8nsapgcsm3ugptbajqb5492858 == \2\t\w\3\t\u\o\e\i\s\f\y\l\t\f\b\n\k\9\q\1\d\f\z\e\8\k\h\2\d\1\5\o\f\n\j\u\9\v\k\1\d\9\s\f\h\a\3\4\n\w\t\d\x\d\m\n\z\1\0\1\u\u\j\0\9\s\4\7\s\r\3\p\9\e\u\9\o\a\2\o\r\0\r\g\8\d\5\a\8\k\6\z\3\r\e\w\p\s\1\f\9\e\p\t\u\v\c\v\0\7\v\e\w\0\d\b\7\q\f\b\9\t\g\y\x\q\z\4\z\9\3\y\5\r\y\p\9\v\f\b\i\i\f\7\2\f\5\g\x\3\8\m\f\y\z\i\1\y\8\x\8\3\x\i\p\o\3\o\f\4\2\b\w\m\6\r\1\7\m\q\j\a\r\z\m\r\y\9\u\8\5\g\w\1\8\7\v\0\5\p\v\e\2\d\x\5\7\j\k\x\d\o\v\a\h\r\g\l\n\x\m\1\c\1\v\8\j\x\r\i\j\d\6\0\c\m\l\w\s\r\p\s\d\z\f\p\4\s\p\c\6\m\2\m\0\9\6\m\f\9\2\v\d\n\y\i\y\x\y\k\3\i\b\s\3\o\1\8\z\1\r\4\m\y\g\u\f\5\p\1\o\l\y\5\e\w\5\d\1\r\5\1\i\z\a\y\l\e\2\j\m\5\l\z\x\c\n\t\p\e\5\w\e\d\0\q\8\f\q\1\y\q\p\j\4\1\o\1\p\t\j\y\d\z\4\c\9\m\q\o\g\9\r\q\8\e\a\2\p\v\o\p\j\s\4\d\d\o\5\i\f\k\3\g\k\8\5\p\s\9\e\8\x\a\g\q\2\l\5\o\m\j\k\9\k\u\1\n\a\m\h\x\c\k\k\c\s\0\a\x\b\a\k\t\r\q\s\u\z\7\x\q\6\7\q\s\x\7\b\z\t\2\j\u\a\6\d\t\s\v\5\c\p\2\d\k\7\a\p\a\5\c\m\1\w\4\k\l\5\p\2\g\q\9\s\l\m\2\e\1\f\j\n\n\u\g\m\5\7\f\r\p\z\v\s\8\n\s\a\p\g\c\s\m\3\u\g\p\t\b\a\j\q\b\5\4\9\2\8\5\8 ]] 00:08:41.545 14:59:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.545 14:59:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:41.545 [2024-11-20 14:59:12.265233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:41.545 [2024-11-20 14:59:12.265382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70165 ] 00:08:41.803 [2024-11-20 14:59:12.395177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.803 [2024-11-20 14:59:12.431719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.803  [2024-11-20T14:59:12.866Z] Copying: 512/512 [B] (average 500 kBps) 00:08:42.062 00:08:42.062 14:59:12 -- dd/posix.sh@93 -- # [[ 2tw3tuoeisfyltfbnk9q1dfze8kh2d15ofnju9vk1d9sfha34nwtdxdmnz101uuj09s47sr3p9eu9oa2or0rg8d5a8k6z3rewps1f9eptuvcv07vew0db7qfb9tgyxqz4z93y5ryp9vfbiif72f5gx38mfyzi1y8x83xipo3of42bwm6r17mqjarzmry9u85gw187v05pve2dx57jkxdovahrglnxm1c1v8jxrijd60cmlwsrpsdzfp4spc6m2m096mf92vdnyiyxyk3ibs3o18z1r4myguf5p1oly5ew5d1r51izayle2jm5lzxcntpe5wed0q8fq1yqpj41o1ptjydz4c9mqog9rq8ea2pvopjs4ddo5ifk3gk85ps9e8xagq2l5omjk9ku1namhxckkcs0axbaktrqsuz7xq67qsx7bzt2jua6dtsv5cp2dk7apa5cm1w4kl5p2gq9slm2e1fjnnugm57frpzvs8nsapgcsm3ugptbajqb5492858 == \2\t\w\3\t\u\o\e\i\s\f\y\l\t\f\b\n\k\9\q\1\d\f\z\e\8\k\h\2\d\1\5\o\f\n\j\u\9\v\k\1\d\9\s\f\h\a\3\4\n\w\t\d\x\d\m\n\z\1\0\1\u\u\j\0\9\s\4\7\s\r\3\p\9\e\u\9\o\a\2\o\r\0\r\g\8\d\5\a\8\k\6\z\3\r\e\w\p\s\1\f\9\e\p\t\u\v\c\v\0\7\v\e\w\0\d\b\7\q\f\b\9\t\g\y\x\q\z\4\z\9\3\y\5\r\y\p\9\v\f\b\i\i\f\7\2\f\5\g\x\3\8\m\f\y\z\i\1\y\8\x\8\3\x\i\p\o\3\o\f\4\2\b\w\m\6\r\1\7\m\q\j\a\r\z\m\r\y\9\u\8\5\g\w\1\8\7\v\0\5\p\v\e\2\d\x\5\7\j\k\x\d\o\v\a\h\r\g\l\n\x\m\1\c\1\v\8\j\x\r\i\j\d\6\0\c\m\l\w\s\r\p\s\d\z\f\p\4\s\p\c\6\m\2\m\0\9\6\m\f\9\2\v\d\n\y\i\y\x\y\k\3\i\b\s\3\o\1\8\z\1\r\4\m\y\g\u\f\5\p\1\o\l\y\5\e\w\5\d\1\r\5\1\i\z\a\y\l\e\2\j\m\5\l\z\x\c\n\t\p\e\5\w\e\d\0\q\8\f\q\1\y\q\p\j\4\1\o\1\p\t\j\y\d\z\4\c\9\m\q\o\g\9\r\q\8\e\a\2\p\v\o\p\j\s\4\d\d\o\5\i\f\k\3\g\k\8\5\p\s\9\e\8\x\a\g\q\2\l\5\o\m\j\k\9\k\u\1\n\a\m\h\x\c\k\k\c\s\0\a\x\b\a\k\t\r\q\s\u\z\7\x\q\6\7\q\s\x\7\b\z\t\2\j\u\a\6\d\t\s\v\5\c\p\2\d\k\7\a\p\a\5\c\m\1\w\4\k\l\5\p\2\g\q\9\s\l\m\2\e\1\f\j\n\n\u\g\m\5\7\f\r\p\z\v\s\8\n\s\a\p\g\c\s\m\3\u\g\p\t\b\a\j\q\b\5\4\9\2\8\5\8 ]] 00:08:42.062 14:59:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:42.062 14:59:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:42.062 [2024-11-20 14:59:12.677444] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:42.062 [2024-11-20 14:59:12.677995] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70173 ] 00:08:42.062 [2024-11-20 14:59:12.811616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.062 [2024-11-20 14:59:12.854222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.321  [2024-11-20T14:59:13.125Z] Copying: 512/512 [B] (average 500 kBps) 00:08:42.321 00:08:42.321 14:59:13 -- dd/posix.sh@93 -- # [[ 2tw3tuoeisfyltfbnk9q1dfze8kh2d15ofnju9vk1d9sfha34nwtdxdmnz101uuj09s47sr3p9eu9oa2or0rg8d5a8k6z3rewps1f9eptuvcv07vew0db7qfb9tgyxqz4z93y5ryp9vfbiif72f5gx38mfyzi1y8x83xipo3of42bwm6r17mqjarzmry9u85gw187v05pve2dx57jkxdovahrglnxm1c1v8jxrijd60cmlwsrpsdzfp4spc6m2m096mf92vdnyiyxyk3ibs3o18z1r4myguf5p1oly5ew5d1r51izayle2jm5lzxcntpe5wed0q8fq1yqpj41o1ptjydz4c9mqog9rq8ea2pvopjs4ddo5ifk3gk85ps9e8xagq2l5omjk9ku1namhxckkcs0axbaktrqsuz7xq67qsx7bzt2jua6dtsv5cp2dk7apa5cm1w4kl5p2gq9slm2e1fjnnugm57frpzvs8nsapgcsm3ugptbajqb5492858 == \2\t\w\3\t\u\o\e\i\s\f\y\l\t\f\b\n\k\9\q\1\d\f\z\e\8\k\h\2\d\1\5\o\f\n\j\u\9\v\k\1\d\9\s\f\h\a\3\4\n\w\t\d\x\d\m\n\z\1\0\1\u\u\j\0\9\s\4\7\s\r\3\p\9\e\u\9\o\a\2\o\r\0\r\g\8\d\5\a\8\k\6\z\3\r\e\w\p\s\1\f\9\e\p\t\u\v\c\v\0\7\v\e\w\0\d\b\7\q\f\b\9\t\g\y\x\q\z\4\z\9\3\y\5\r\y\p\9\v\f\b\i\i\f\7\2\f\5\g\x\3\8\m\f\y\z\i\1\y\8\x\8\3\x\i\p\o\3\o\f\4\2\b\w\m\6\r\1\7\m\q\j\a\r\z\m\r\y\9\u\8\5\g\w\1\8\7\v\0\5\p\v\e\2\d\x\5\7\j\k\x\d\o\v\a\h\r\g\l\n\x\m\1\c\1\v\8\j\x\r\i\j\d\6\0\c\m\l\w\s\r\p\s\d\z\f\p\4\s\p\c\6\m\2\m\0\9\6\m\f\9\2\v\d\n\y\i\y\x\y\k\3\i\b\s\3\o\1\8\z\1\r\4\m\y\g\u\f\5\p\1\o\l\y\5\e\w\5\d\1\r\5\1\i\z\a\y\l\e\2\j\m\5\l\z\x\c\n\t\p\e\5\w\e\d\0\q\8\f\q\1\y\q\p\j\4\1\o\1\p\t\j\y\d\z\4\c\9\m\q\o\g\9\r\q\8\e\a\2\p\v\o\p\j\s\4\d\d\o\5\i\f\k\3\g\k\8\5\p\s\9\e\8\x\a\g\q\2\l\5\o\m\j\k\9\k\u\1\n\a\m\h\x\c\k\k\c\s\0\a\x\b\a\k\t\r\q\s\u\z\7\x\q\6\7\q\s\x\7\b\z\t\2\j\u\a\6\d\t\s\v\5\c\p\2\d\k\7\a\p\a\5\c\m\1\w\4\k\l\5\p\2\g\q\9\s\l\m\2\e\1\f\j\n\n\u\g\m\5\7\f\r\p\z\v\s\8\n\s\a\p\g\c\s\m\3\u\g\p\t\b\a\j\q\b\5\4\9\2\8\5\8 ]] 00:08:42.321 14:59:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:42.321 14:59:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:42.321 [2024-11-20 14:59:13.093489] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:42.321 [2024-11-20 14:59:13.093589] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70180 ] 00:08:42.579 [2024-11-20 14:59:13.224431] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.579 [2024-11-20 14:59:13.259438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.579  [2024-11-20T14:59:13.641Z] Copying: 512/512 [B] (average 500 kBps) 00:08:42.837 00:08:42.837 14:59:13 -- dd/posix.sh@93 -- # [[ 2tw3tuoeisfyltfbnk9q1dfze8kh2d15ofnju9vk1d9sfha34nwtdxdmnz101uuj09s47sr3p9eu9oa2or0rg8d5a8k6z3rewps1f9eptuvcv07vew0db7qfb9tgyxqz4z93y5ryp9vfbiif72f5gx38mfyzi1y8x83xipo3of42bwm6r17mqjarzmry9u85gw187v05pve2dx57jkxdovahrglnxm1c1v8jxrijd60cmlwsrpsdzfp4spc6m2m096mf92vdnyiyxyk3ibs3o18z1r4myguf5p1oly5ew5d1r51izayle2jm5lzxcntpe5wed0q8fq1yqpj41o1ptjydz4c9mqog9rq8ea2pvopjs4ddo5ifk3gk85ps9e8xagq2l5omjk9ku1namhxckkcs0axbaktrqsuz7xq67qsx7bzt2jua6dtsv5cp2dk7apa5cm1w4kl5p2gq9slm2e1fjnnugm57frpzvs8nsapgcsm3ugptbajqb5492858 == \2\t\w\3\t\u\o\e\i\s\f\y\l\t\f\b\n\k\9\q\1\d\f\z\e\8\k\h\2\d\1\5\o\f\n\j\u\9\v\k\1\d\9\s\f\h\a\3\4\n\w\t\d\x\d\m\n\z\1\0\1\u\u\j\0\9\s\4\7\s\r\3\p\9\e\u\9\o\a\2\o\r\0\r\g\8\d\5\a\8\k\6\z\3\r\e\w\p\s\1\f\9\e\p\t\u\v\c\v\0\7\v\e\w\0\d\b\7\q\f\b\9\t\g\y\x\q\z\4\z\9\3\y\5\r\y\p\9\v\f\b\i\i\f\7\2\f\5\g\x\3\8\m\f\y\z\i\1\y\8\x\8\3\x\i\p\o\3\o\f\4\2\b\w\m\6\r\1\7\m\q\j\a\r\z\m\r\y\9\u\8\5\g\w\1\8\7\v\0\5\p\v\e\2\d\x\5\7\j\k\x\d\o\v\a\h\r\g\l\n\x\m\1\c\1\v\8\j\x\r\i\j\d\6\0\c\m\l\w\s\r\p\s\d\z\f\p\4\s\p\c\6\m\2\m\0\9\6\m\f\9\2\v\d\n\y\i\y\x\y\k\3\i\b\s\3\o\1\8\z\1\r\4\m\y\g\u\f\5\p\1\o\l\y\5\e\w\5\d\1\r\5\1\i\z\a\y\l\e\2\j\m\5\l\z\x\c\n\t\p\e\5\w\e\d\0\q\8\f\q\1\y\q\p\j\4\1\o\1\p\t\j\y\d\z\4\c\9\m\q\o\g\9\r\q\8\e\a\2\p\v\o\p\j\s\4\d\d\o\5\i\f\k\3\g\k\8\5\p\s\9\e\8\x\a\g\q\2\l\5\o\m\j\k\9\k\u\1\n\a\m\h\x\c\k\k\c\s\0\a\x\b\a\k\t\r\q\s\u\z\7\x\q\6\7\q\s\x\7\b\z\t\2\j\u\a\6\d\t\s\v\5\c\p\2\d\k\7\a\p\a\5\c\m\1\w\4\k\l\5\p\2\g\q\9\s\l\m\2\e\1\f\j\n\n\u\g\m\5\7\f\r\p\z\v\s\8\n\s\a\p\g\c\s\m\3\u\g\p\t\b\a\j\q\b\5\4\9\2\8\5\8 ]] 00:08:42.837 00:08:42.837 real 0m3.398s 00:08:42.837 user 0m1.653s 00:08:42.837 sys 0m0.760s 00:08:42.837 ************************************ 00:08:42.837 END TEST dd_flags_misc 00:08:42.837 ************************************ 00:08:42.837 14:59:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.837 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:42.837 14:59:13 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:42.837 14:59:13 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:42.837 * Second test run, disabling liburing, forcing AIO 00:08:42.837 14:59:13 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:42.837 14:59:13 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:42.837 14:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.837 14:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.837 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:42.837 ************************************ 00:08:42.837 START TEST dd_flag_append_forced_aio 00:08:42.837 ************************************ 00:08:42.837 14:59:13 -- common/autotest_common.sh@1114 -- # append 00:08:42.837 14:59:13 -- dd/posix.sh@16 -- # local dump0 00:08:42.837 14:59:13 -- dd/posix.sh@17 -- # local dump1 00:08:42.837 14:59:13 -- dd/posix.sh@19 -- # gen_bytes 32 
00:08:42.838 14:59:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:42.838 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 14:59:13 -- dd/posix.sh@19 -- # dump0=1yhjar35z3z5hasom13hxb2xp8g9pnh9 00:08:42.838 14:59:13 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:42.838 14:59:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:42.838 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:42.838 14:59:13 -- dd/posix.sh@20 -- # dump1=hxvcva0lvhurbghy6p82dty94if47qyb 00:08:42.838 14:59:13 -- dd/posix.sh@22 -- # printf %s 1yhjar35z3z5hasom13hxb2xp8g9pnh9 00:08:42.838 14:59:13 -- dd/posix.sh@23 -- # printf %s hxvcva0lvhurbghy6p82dty94if47qyb 00:08:42.838 14:59:13 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:42.838 [2024-11-20 14:59:13.576142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:42.838 [2024-11-20 14:59:13.576291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70207 ] 00:08:43.095 [2024-11-20 14:59:13.722439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.095 [2024-11-20 14:59:13.760959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.095  [2024-11-20T14:59:14.158Z] Copying: 32/32 [B] (average 31 kBps) 00:08:43.354 00:08:43.354 ************************************ 00:08:43.354 END TEST dd_flag_append_forced_aio 00:08:43.354 ************************************ 00:08:43.354 14:59:13 -- dd/posix.sh@27 -- # [[ hxvcva0lvhurbghy6p82dty94if47qyb1yhjar35z3z5hasom13hxb2xp8g9pnh9 == \h\x\v\c\v\a\0\l\v\h\u\r\b\g\h\y\6\p\8\2\d\t\y\9\4\i\f\4\7\q\y\b\1\y\h\j\a\r\3\5\z\3\z\5\h\a\s\o\m\1\3\h\x\b\2\x\p\8\g\9\p\n\h\9 ]] 00:08:43.354 00:08:43.354 real 0m0.455s 00:08:43.354 user 0m0.212s 00:08:43.354 sys 0m0.119s 00:08:43.354 14:59:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.354 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 14:59:13 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:43.354 14:59:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.354 14:59:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.354 14:59:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.354 ************************************ 00:08:43.354 START TEST dd_flag_directory_forced_aio 00:08:43.354 ************************************ 00:08:43.354 14:59:14 -- common/autotest_common.sh@1114 -- # directory 00:08:43.354 14:59:14 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.354 14:59:14 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.354 14:59:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.354 14:59:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.354 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.354 14:59:14 -- common/autotest_common.sh@642 -- # 
type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.354 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.354 14:59:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.354 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.354 14:59:14 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.354 14:59:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.354 14:59:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:43.354 [2024-11-20 14:59:14.043963] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:43.354 [2024-11-20 14:59:14.044238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70233 ] 00:08:43.614 [2024-11-20 14:59:14.174408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.614 [2024-11-20 14:59:14.214687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.614 [2024-11-20 14:59:14.260305] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.614 [2024-11-20 14:59:14.260369] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:43.614 [2024-11-20 14:59:14.260385] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.614 [2024-11-20 14:59:14.323141] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.614 14:59:14 -- common/autotest_common.sh@653 -- # es=236 00:08:43.614 14:59:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.614 14:59:14 -- common/autotest_common.sh@662 -- # es=108 00:08:43.614 14:59:14 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:43.614 14:59:14 -- common/autotest_common.sh@670 -- # es=1 00:08:43.614 14:59:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.614 14:59:14 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.614 14:59:14 -- common/autotest_common.sh@650 -- # local es=0 00:08:43.614 14:59:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.614 14:59:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.614 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.614 14:59:14 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.614 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.614 14:59:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.614 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:43.614 14:59:14 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.614 14:59:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.614 14:59:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:43.873 [2024-11-20 14:59:14.455587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:43.873 [2024-11-20 14:59:14.455759] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70237 ] 00:08:43.873 [2024-11-20 14:59:14.598380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.873 [2024-11-20 14:59:14.633245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.132 [2024-11-20 14:59:14.677715] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:44.132 [2024-11-20 14:59:14.677965] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:44.132 [2024-11-20 14:59:14.677985] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.132 [2024-11-20 14:59:14.739108] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:44.132 14:59:14 -- common/autotest_common.sh@653 -- # es=236 00:08:44.132 14:59:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.132 14:59:14 -- common/autotest_common.sh@662 -- # es=108 00:08:44.132 14:59:14 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.132 14:59:14 -- common/autotest_common.sh@670 -- # es=1 00:08:44.132 14:59:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.132 00:08:44.132 real 0m0.806s 00:08:44.132 user 0m0.409s 00:08:44.132 sys 0m0.187s 00:08:44.132 14:59:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.132 ************************************ 00:08:44.132 END TEST dd_flag_directory_forced_aio 00:08:44.132 ************************************ 00:08:44.132 14:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.132 14:59:14 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:44.132 14:59:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:44.132 14:59:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.132 14:59:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.132 ************************************ 00:08:44.132 START TEST dd_flag_nofollow_forced_aio 00:08:44.132 ************************************ 00:08:44.132 14:59:14 -- common/autotest_common.sh@1114 -- # nofollow 00:08:44.132 14:59:14 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:44.132 14:59:14 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:44.132 14:59:14 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:44.132 14:59:14 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:44.133 14:59:14 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.133 14:59:14 -- common/autotest_common.sh@650 -- # local es=0 00:08:44.133 14:59:14 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.133 14:59:14 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.133 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.133 14:59:14 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.133 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.133 14:59:14 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.133 14:59:14 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.133 14:59:14 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.133 14:59:14 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.133 14:59:14 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:44.133 [2024-11-20 14:59:14.913986] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:44.133 [2024-11-20 14:59:14.914120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70270 ] 00:08:44.392 [2024-11-20 14:59:15.052381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.392 [2024-11-20 14:59:15.094122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.392 [2024-11-20 14:59:15.147188] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:44.392 [2024-11-20 14:59:15.147273] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:44.392 [2024-11-20 14:59:15.147298] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.652 [2024-11-20 14:59:15.221076] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:44.652 14:59:15 -- common/autotest_common.sh@653 -- # es=216 00:08:44.652 14:59:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:44.652 14:59:15 -- common/autotest_common.sh@662 -- # es=88 00:08:44.652 14:59:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:44.652 14:59:15 -- common/autotest_common.sh@670 -- # es=1 00:08:44.652 14:59:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:44.652 14:59:15 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.652 14:59:15 -- common/autotest_common.sh@650 -- # local es=0 00:08:44.652 14:59:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.652 14:59:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.652 14:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.652 14:59:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.652 14:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.652 14:59:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.652 14:59:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:44.652 14:59:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.652 14:59:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.652 14:59:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:44.652 [2024-11-20 14:59:15.361216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:44.652 [2024-11-20 14:59:15.361597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70275 ] 00:08:44.913 [2024-11-20 14:59:15.500423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.913 [2024-11-20 14:59:15.536517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.913 [2024-11-20 14:59:15.583208] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:44.913 [2024-11-20 14:59:15.583267] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:44.913 [2024-11-20 14:59:15.583285] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.913 [2024-11-20 14:59:15.649007] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:45.172 14:59:15 -- common/autotest_common.sh@653 -- # es=216 00:08:45.172 14:59:15 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.172 14:59:15 -- common/autotest_common.sh@662 -- # es=88 00:08:45.172 14:59:15 -- common/autotest_common.sh@663 -- # case "$es" in 00:08:45.172 14:59:15 -- common/autotest_common.sh@670 -- # es=1 00:08:45.172 14:59:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.172 14:59:15 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:45.172 14:59:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:45.172 14:59:15 -- common/autotest_common.sh@10 -- # set +x 00:08:45.172 14:59:15 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.172 [2024-11-20 14:59:15.762767] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:45.172 [2024-11-20 14:59:15.762872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70283 ] 00:08:45.172 [2024-11-20 14:59:15.895501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.172 [2024-11-20 14:59:15.932175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.431  [2024-11-20T14:59:16.235Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.431 00:08:45.432 14:59:16 -- dd/posix.sh@49 -- # [[ s4gippsdk6t0o08defc3wsnun274p8kvtki6ndxajjmmsxh35wg63etiag5u7kkvx5dmwphq063ib3ziei7fxbkelyqj6kshn2pfcedipwdq79fes93cs2njgjvdfstx059oh4l40k7ik2xiwss24d2ga76yozvrl8o7ulmbnmhjwbua8zlrdwlgf5iabblo50a14gslnrvubxbek1p0ixsbuecilspnrqsqerut71nubdxr1sfg3y42tjh6vz7eb051ju693q8mdf02icivnnwz7c7yl8dhteisup98jy9rg3h4sji4en0rnw3m2cpqy80yie9m89zwy4wkkx50vkd4qdkj0nybpn3w35r3sni8a409oiojth84gl83trh057ku6q9fwcy4m71afhngw0eo7phtgtldngbqbam76ikmrcqtc35ascbnn3shdwqv13xrk38qy6c90ytz1tb5meafdywbugk26e0vqono2qcz526bb5rexsx49rvz1ufz == \s\4\g\i\p\p\s\d\k\6\t\0\o\0\8\d\e\f\c\3\w\s\n\u\n\2\7\4\p\8\k\v\t\k\i\6\n\d\x\a\j\j\m\m\s\x\h\3\5\w\g\6\3\e\t\i\a\g\5\u\7\k\k\v\x\5\d\m\w\p\h\q\0\6\3\i\b\3\z\i\e\i\7\f\x\b\k\e\l\y\q\j\6\k\s\h\n\2\p\f\c\e\d\i\p\w\d\q\7\9\f\e\s\9\3\c\s\2\n\j\g\j\v\d\f\s\t\x\0\5\9\o\h\4\l\4\0\k\7\i\k\2\x\i\w\s\s\2\4\d\2\g\a\7\6\y\o\z\v\r\l\8\o\7\u\l\m\b\n\m\h\j\w\b\u\a\8\z\l\r\d\w\l\g\f\5\i\a\b\b\l\o\5\0\a\1\4\g\s\l\n\r\v\u\b\x\b\e\k\1\p\0\i\x\s\b\u\e\c\i\l\s\p\n\r\q\s\q\e\r\u\t\7\1\n\u\b\d\x\r\1\s\f\g\3\y\4\2\t\j\h\6\v\z\7\e\b\0\5\1\j\u\6\9\3\q\8\m\d\f\0\2\i\c\i\v\n\n\w\z\7\c\7\y\l\8\d\h\t\e\i\s\u\p\9\8\j\y\9\r\g\3\h\4\s\j\i\4\e\n\0\r\n\w\3\m\2\c\p\q\y\8\0\y\i\e\9\m\8\9\z\w\y\4\w\k\k\x\5\0\v\k\d\4\q\d\k\j\0\n\y\b\p\n\3\w\3\5\r\3\s\n\i\8\a\4\0\9\o\i\o\j\t\h\8\4\g\l\8\3\t\r\h\0\5\7\k\u\6\q\9\f\w\c\y\4\m\7\1\a\f\h\n\g\w\0\e\o\7\p\h\t\g\t\l\d\n\g\b\q\b\a\m\7\6\i\k\m\r\c\q\t\c\3\5\a\s\c\b\n\n\3\s\h\d\w\q\v\1\3\x\r\k\3\8\q\y\6\c\9\0\y\t\z\1\t\b\5\m\e\a\f\d\y\w\b\u\g\k\2\6\e\0\v\q\o\n\o\2\q\c\z\5\2\6\b\b\5\r\e\x\s\x\4\9\r\v\z\1\u\f\z ]] 00:08:45.432 00:08:45.432 real 0m1.281s 00:08:45.432 user 0m0.642s 00:08:45.432 sys 0m0.307s 00:08:45.432 14:59:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.432 ************************************ 00:08:45.432 END TEST dd_flag_nofollow_forced_aio 00:08:45.432 ************************************ 00:08:45.432 14:59:16 -- common/autotest_common.sh@10 -- # set +x 00:08:45.432 14:59:16 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:45.432 14:59:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.432 14:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.432 14:59:16 -- common/autotest_common.sh@10 -- # set +x 00:08:45.432 ************************************ 00:08:45.432 START TEST dd_flag_noatime_forced_aio 00:08:45.432 ************************************ 00:08:45.432 14:59:16 -- common/autotest_common.sh@1114 -- # noatime 00:08:45.432 14:59:16 -- dd/posix.sh@53 -- # local atime_if 00:08:45.432 14:59:16 -- dd/posix.sh@54 -- # local atime_of 00:08:45.432 14:59:16 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:45.432 14:59:16 -- dd/common.sh@98 -- # xtrace_disable 00:08:45.432 14:59:16 -- common/autotest_common.sh@10 -- # set +x 00:08:45.432 14:59:16 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:45.432 14:59:16 -- dd/posix.sh@60 -- 
# atime_if=1732114755 00:08:45.432 14:59:16 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:45.432 14:59:16 -- dd/posix.sh@61 -- # atime_of=1732114756 00:08:45.432 14:59:16 -- dd/posix.sh@66 -- # sleep 1 00:08:46.809 14:59:17 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:46.809 [2024-11-20 14:59:17.235485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:46.809 [2024-11-20 14:59:17.235829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70323 ] 00:08:46.809 [2024-11-20 14:59:17.365828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.809 [2024-11-20 14:59:17.406587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.809  [2024-11-20T14:59:17.613Z] Copying: 512/512 [B] (average 500 kBps) 00:08:46.809 00:08:47.068 14:59:17 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.068 14:59:17 -- dd/posix.sh@69 -- # (( atime_if == 1732114755 )) 00:08:47.068 14:59:17 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.068 14:59:17 -- dd/posix.sh@70 -- # (( atime_of == 1732114756 )) 00:08:47.068 14:59:17 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.068 [2024-11-20 14:59:17.661140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:47.068 [2024-11-20 14:59:17.661237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70335 ] 00:08:47.068 [2024-11-20 14:59:17.793708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.068 [2024-11-20 14:59:17.828681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.327  [2024-11-20T14:59:18.131Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.327 00:08:47.327 14:59:18 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.327 14:59:18 -- dd/posix.sh@73 -- # (( atime_if < 1732114757 )) 00:08:47.327 00:08:47.327 real 0m1.863s 00:08:47.327 user 0m0.409s 00:08:47.327 sys 0m0.204s 00:08:47.327 14:59:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:47.327 ************************************ 00:08:47.327 END TEST dd_flag_noatime_forced_aio 00:08:47.327 ************************************ 00:08:47.327 14:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:47.327 14:59:18 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:47.327 14:59:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.327 14:59:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.327 14:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:47.327 ************************************ 00:08:47.327 START TEST dd_flags_misc_forced_aio 00:08:47.327 ************************************ 00:08:47.327 14:59:18 -- common/autotest_common.sh@1114 -- # io 00:08:47.327 14:59:18 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:47.327 14:59:18 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:47.327 14:59:18 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:47.327 14:59:18 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:47.327 14:59:18 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:47.327 14:59:18 -- dd/common.sh@98 -- # xtrace_disable 00:08:47.327 14:59:18 -- common/autotest_common.sh@10 -- # set +x 00:08:47.327 14:59:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.327 14:59:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:47.327 [2024-11-20 14:59:18.127742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:47.327 [2024-11-20 14:59:18.127841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70361 ] 00:08:47.585 [2024-11-20 14:59:18.258027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.585 [2024-11-20 14:59:18.300539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.585  [2024-11-20T14:59:18.648Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.844 00:08:47.844 14:59:18 -- dd/posix.sh@93 -- # [[ u4y6h2o98dch0n9a6fsxf5t0zujcmte8j0q9c0zatoietcvze4f72hr8ul2lt84i7psx0v12vonkjo9f5k7hc8ojwjfw1ot247kh7faop7hnu2niz5iyeqh6j1j3wxitthb6a5pemw8vo8lj2tyb6rc47j2n30lnowihvnarlhoaovq8j6ppljatjxhau22jkjryzfrogw495vqj7vmcl9e39qdt7bxfbwe079rqiqoujwkc32skg17jl3bw67m9grv9y3yalut24keo0pmc5bpukwn2f7qt4kqij3of4jktmh4n6xgeclx79zuclozof8i2iurb18rnrgeoe07dcdcvk93ceupnt0pfeqmbsz3urxrhh5skjcexqh0e51eg7kasashue333y4w16g1eyek88ref40ooxebbxt76qg5e6db3nhd3w4tis1kbfff63xg002bbrmv0iztry3ad77xtbask87tkqbkh733gu137cg9xha3fp3sgdv81j80l == \u\4\y\6\h\2\o\9\8\d\c\h\0\n\9\a\6\f\s\x\f\5\t\0\z\u\j\c\m\t\e\8\j\0\q\9\c\0\z\a\t\o\i\e\t\c\v\z\e\4\f\7\2\h\r\8\u\l\2\l\t\8\4\i\7\p\s\x\0\v\1\2\v\o\n\k\j\o\9\f\5\k\7\h\c\8\o\j\w\j\f\w\1\o\t\2\4\7\k\h\7\f\a\o\p\7\h\n\u\2\n\i\z\5\i\y\e\q\h\6\j\1\j\3\w\x\i\t\t\h\b\6\a\5\p\e\m\w\8\v\o\8\l\j\2\t\y\b\6\r\c\4\7\j\2\n\3\0\l\n\o\w\i\h\v\n\a\r\l\h\o\a\o\v\q\8\j\6\p\p\l\j\a\t\j\x\h\a\u\2\2\j\k\j\r\y\z\f\r\o\g\w\4\9\5\v\q\j\7\v\m\c\l\9\e\3\9\q\d\t\7\b\x\f\b\w\e\0\7\9\r\q\i\q\o\u\j\w\k\c\3\2\s\k\g\1\7\j\l\3\b\w\6\7\m\9\g\r\v\9\y\3\y\a\l\u\t\2\4\k\e\o\0\p\m\c\5\b\p\u\k\w\n\2\f\7\q\t\4\k\q\i\j\3\o\f\4\j\k\t\m\h\4\n\6\x\g\e\c\l\x\7\9\z\u\c\l\o\z\o\f\8\i\2\i\u\r\b\1\8\r\n\r\g\e\o\e\0\7\d\c\d\c\v\k\9\3\c\e\u\p\n\t\0\p\f\e\q\m\b\s\z\3\u\r\x\r\h\h\5\s\k\j\c\e\x\q\h\0\e\5\1\e\g\7\k\a\s\a\s\h\u\e\3\3\3\y\4\w\1\6\g\1\e\y\e\k\8\8\r\e\f\4\0\o\o\x\e\b\b\x\t\7\6\q\g\5\e\6\d\b\3\n\h\d\3\w\4\t\i\s\1\k\b\f\f\f\6\3\x\g\0\0\2\b\b\r\m\v\0\i\z\t\r\y\3\a\d\7\7\x\t\b\a\s\k\8\7\t\k\q\b\k\h\7\3\3\g\u\1\3\7\c\g\9\x\h\a\3\f\p\3\s\g\d\v\8\1\j\8\0\l ]] 00:08:47.844 14:59:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.844 14:59:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:47.844 [2024-11-20 14:59:18.568943] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:47.844 [2024-11-20 14:59:18.569039] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70363 ] 00:08:48.102 [2024-11-20 14:59:18.706978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.102 [2024-11-20 14:59:18.751183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.102  [2024-11-20T14:59:19.165Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.361 00:08:48.361 14:59:18 -- dd/posix.sh@93 -- # [[ u4y6h2o98dch0n9a6fsxf5t0zujcmte8j0q9c0zatoietcvze4f72hr8ul2lt84i7psx0v12vonkjo9f5k7hc8ojwjfw1ot247kh7faop7hnu2niz5iyeqh6j1j3wxitthb6a5pemw8vo8lj2tyb6rc47j2n30lnowihvnarlhoaovq8j6ppljatjxhau22jkjryzfrogw495vqj7vmcl9e39qdt7bxfbwe079rqiqoujwkc32skg17jl3bw67m9grv9y3yalut24keo0pmc5bpukwn2f7qt4kqij3of4jktmh4n6xgeclx79zuclozof8i2iurb18rnrgeoe07dcdcvk93ceupnt0pfeqmbsz3urxrhh5skjcexqh0e51eg7kasashue333y4w16g1eyek88ref40ooxebbxt76qg5e6db3nhd3w4tis1kbfff63xg002bbrmv0iztry3ad77xtbask87tkqbkh733gu137cg9xha3fp3sgdv81j80l == \u\4\y\6\h\2\o\9\8\d\c\h\0\n\9\a\6\f\s\x\f\5\t\0\z\u\j\c\m\t\e\8\j\0\q\9\c\0\z\a\t\o\i\e\t\c\v\z\e\4\f\7\2\h\r\8\u\l\2\l\t\8\4\i\7\p\s\x\0\v\1\2\v\o\n\k\j\o\9\f\5\k\7\h\c\8\o\j\w\j\f\w\1\o\t\2\4\7\k\h\7\f\a\o\p\7\h\n\u\2\n\i\z\5\i\y\e\q\h\6\j\1\j\3\w\x\i\t\t\h\b\6\a\5\p\e\m\w\8\v\o\8\l\j\2\t\y\b\6\r\c\4\7\j\2\n\3\0\l\n\o\w\i\h\v\n\a\r\l\h\o\a\o\v\q\8\j\6\p\p\l\j\a\t\j\x\h\a\u\2\2\j\k\j\r\y\z\f\r\o\g\w\4\9\5\v\q\j\7\v\m\c\l\9\e\3\9\q\d\t\7\b\x\f\b\w\e\0\7\9\r\q\i\q\o\u\j\w\k\c\3\2\s\k\g\1\7\j\l\3\b\w\6\7\m\9\g\r\v\9\y\3\y\a\l\u\t\2\4\k\e\o\0\p\m\c\5\b\p\u\k\w\n\2\f\7\q\t\4\k\q\i\j\3\o\f\4\j\k\t\m\h\4\n\6\x\g\e\c\l\x\7\9\z\u\c\l\o\z\o\f\8\i\2\i\u\r\b\1\8\r\n\r\g\e\o\e\0\7\d\c\d\c\v\k\9\3\c\e\u\p\n\t\0\p\f\e\q\m\b\s\z\3\u\r\x\r\h\h\5\s\k\j\c\e\x\q\h\0\e\5\1\e\g\7\k\a\s\a\s\h\u\e\3\3\3\y\4\w\1\6\g\1\e\y\e\k\8\8\r\e\f\4\0\o\o\x\e\b\b\x\t\7\6\q\g\5\e\6\d\b\3\n\h\d\3\w\4\t\i\s\1\k\b\f\f\f\6\3\x\g\0\0\2\b\b\r\m\v\0\i\z\t\r\y\3\a\d\7\7\x\t\b\a\s\k\8\7\t\k\q\b\k\h\7\3\3\g\u\1\3\7\c\g\9\x\h\a\3\f\p\3\s\g\d\v\8\1\j\8\0\l ]] 00:08:48.361 14:59:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.361 14:59:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:48.361 [2024-11-20 14:59:19.001225] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:48.361 [2024-11-20 14:59:19.001543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70371 ] 00:08:48.361 [2024-11-20 14:59:19.132478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.619 [2024-11-20 14:59:19.166355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.619  [2024-11-20T14:59:19.423Z] Copying: 512/512 [B] (average 250 kBps) 00:08:48.619 00:08:48.619 14:59:19 -- dd/posix.sh@93 -- # [[ u4y6h2o98dch0n9a6fsxf5t0zujcmte8j0q9c0zatoietcvze4f72hr8ul2lt84i7psx0v12vonkjo9f5k7hc8ojwjfw1ot247kh7faop7hnu2niz5iyeqh6j1j3wxitthb6a5pemw8vo8lj2tyb6rc47j2n30lnowihvnarlhoaovq8j6ppljatjxhau22jkjryzfrogw495vqj7vmcl9e39qdt7bxfbwe079rqiqoujwkc32skg17jl3bw67m9grv9y3yalut24keo0pmc5bpukwn2f7qt4kqij3of4jktmh4n6xgeclx79zuclozof8i2iurb18rnrgeoe07dcdcvk93ceupnt0pfeqmbsz3urxrhh5skjcexqh0e51eg7kasashue333y4w16g1eyek88ref40ooxebbxt76qg5e6db3nhd3w4tis1kbfff63xg002bbrmv0iztry3ad77xtbask87tkqbkh733gu137cg9xha3fp3sgdv81j80l == \u\4\y\6\h\2\o\9\8\d\c\h\0\n\9\a\6\f\s\x\f\5\t\0\z\u\j\c\m\t\e\8\j\0\q\9\c\0\z\a\t\o\i\e\t\c\v\z\e\4\f\7\2\h\r\8\u\l\2\l\t\8\4\i\7\p\s\x\0\v\1\2\v\o\n\k\j\o\9\f\5\k\7\h\c\8\o\j\w\j\f\w\1\o\t\2\4\7\k\h\7\f\a\o\p\7\h\n\u\2\n\i\z\5\i\y\e\q\h\6\j\1\j\3\w\x\i\t\t\h\b\6\a\5\p\e\m\w\8\v\o\8\l\j\2\t\y\b\6\r\c\4\7\j\2\n\3\0\l\n\o\w\i\h\v\n\a\r\l\h\o\a\o\v\q\8\j\6\p\p\l\j\a\t\j\x\h\a\u\2\2\j\k\j\r\y\z\f\r\o\g\w\4\9\5\v\q\j\7\v\m\c\l\9\e\3\9\q\d\t\7\b\x\f\b\w\e\0\7\9\r\q\i\q\o\u\j\w\k\c\3\2\s\k\g\1\7\j\l\3\b\w\6\7\m\9\g\r\v\9\y\3\y\a\l\u\t\2\4\k\e\o\0\p\m\c\5\b\p\u\k\w\n\2\f\7\q\t\4\k\q\i\j\3\o\f\4\j\k\t\m\h\4\n\6\x\g\e\c\l\x\7\9\z\u\c\l\o\z\o\f\8\i\2\i\u\r\b\1\8\r\n\r\g\e\o\e\0\7\d\c\d\c\v\k\9\3\c\e\u\p\n\t\0\p\f\e\q\m\b\s\z\3\u\r\x\r\h\h\5\s\k\j\c\e\x\q\h\0\e\5\1\e\g\7\k\a\s\a\s\h\u\e\3\3\3\y\4\w\1\6\g\1\e\y\e\k\8\8\r\e\f\4\0\o\o\x\e\b\b\x\t\7\6\q\g\5\e\6\d\b\3\n\h\d\3\w\4\t\i\s\1\k\b\f\f\f\6\3\x\g\0\0\2\b\b\r\m\v\0\i\z\t\r\y\3\a\d\7\7\x\t\b\a\s\k\8\7\t\k\q\b\k\h\7\3\3\g\u\1\3\7\c\g\9\x\h\a\3\f\p\3\s\g\d\v\8\1\j\8\0\l ]] 00:08:48.619 14:59:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.619 14:59:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:48.619 [2024-11-20 14:59:19.411251] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:48.619 [2024-11-20 14:59:19.411386] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70378 ] 00:08:48.877 [2024-11-20 14:59:19.551602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.877 [2024-11-20 14:59:19.586205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.877  [2024-11-20T14:59:19.940Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.136 00:08:49.136 14:59:19 -- dd/posix.sh@93 -- # [[ u4y6h2o98dch0n9a6fsxf5t0zujcmte8j0q9c0zatoietcvze4f72hr8ul2lt84i7psx0v12vonkjo9f5k7hc8ojwjfw1ot247kh7faop7hnu2niz5iyeqh6j1j3wxitthb6a5pemw8vo8lj2tyb6rc47j2n30lnowihvnarlhoaovq8j6ppljatjxhau22jkjryzfrogw495vqj7vmcl9e39qdt7bxfbwe079rqiqoujwkc32skg17jl3bw67m9grv9y3yalut24keo0pmc5bpukwn2f7qt4kqij3of4jktmh4n6xgeclx79zuclozof8i2iurb18rnrgeoe07dcdcvk93ceupnt0pfeqmbsz3urxrhh5skjcexqh0e51eg7kasashue333y4w16g1eyek88ref40ooxebbxt76qg5e6db3nhd3w4tis1kbfff63xg002bbrmv0iztry3ad77xtbask87tkqbkh733gu137cg9xha3fp3sgdv81j80l == \u\4\y\6\h\2\o\9\8\d\c\h\0\n\9\a\6\f\s\x\f\5\t\0\z\u\j\c\m\t\e\8\j\0\q\9\c\0\z\a\t\o\i\e\t\c\v\z\e\4\f\7\2\h\r\8\u\l\2\l\t\8\4\i\7\p\s\x\0\v\1\2\v\o\n\k\j\o\9\f\5\k\7\h\c\8\o\j\w\j\f\w\1\o\t\2\4\7\k\h\7\f\a\o\p\7\h\n\u\2\n\i\z\5\i\y\e\q\h\6\j\1\j\3\w\x\i\t\t\h\b\6\a\5\p\e\m\w\8\v\o\8\l\j\2\t\y\b\6\r\c\4\7\j\2\n\3\0\l\n\o\w\i\h\v\n\a\r\l\h\o\a\o\v\q\8\j\6\p\p\l\j\a\t\j\x\h\a\u\2\2\j\k\j\r\y\z\f\r\o\g\w\4\9\5\v\q\j\7\v\m\c\l\9\e\3\9\q\d\t\7\b\x\f\b\w\e\0\7\9\r\q\i\q\o\u\j\w\k\c\3\2\s\k\g\1\7\j\l\3\b\w\6\7\m\9\g\r\v\9\y\3\y\a\l\u\t\2\4\k\e\o\0\p\m\c\5\b\p\u\k\w\n\2\f\7\q\t\4\k\q\i\j\3\o\f\4\j\k\t\m\h\4\n\6\x\g\e\c\l\x\7\9\z\u\c\l\o\z\o\f\8\i\2\i\u\r\b\1\8\r\n\r\g\e\o\e\0\7\d\c\d\c\v\k\9\3\c\e\u\p\n\t\0\p\f\e\q\m\b\s\z\3\u\r\x\r\h\h\5\s\k\j\c\e\x\q\h\0\e\5\1\e\g\7\k\a\s\a\s\h\u\e\3\3\3\y\4\w\1\6\g\1\e\y\e\k\8\8\r\e\f\4\0\o\o\x\e\b\b\x\t\7\6\q\g\5\e\6\d\b\3\n\h\d\3\w\4\t\i\s\1\k\b\f\f\f\6\3\x\g\0\0\2\b\b\r\m\v\0\i\z\t\r\y\3\a\d\7\7\x\t\b\a\s\k\8\7\t\k\q\b\k\h\7\3\3\g\u\1\3\7\c\g\9\x\h\a\3\f\p\3\s\g\d\v\8\1\j\8\0\l ]] 00:08:49.136 14:59:19 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:49.136 14:59:19 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:49.136 14:59:19 -- dd/common.sh@98 -- # xtrace_disable 00:08:49.136 14:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.136 14:59:19 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.136 14:59:19 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:49.136 [2024-11-20 14:59:19.837725] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:49.136 [2024-11-20 14:59:19.838038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70386 ] 00:08:49.394 [2024-11-20 14:59:19.977444] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.395 [2024-11-20 14:59:20.012389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.395  [2024-11-20T14:59:20.458Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.654 00:08:49.654 14:59:20 -- dd/posix.sh@93 -- # [[ ngh9elh77qlz9reedwfmu43yn3m9kbx2a3aly7t3t8tptftmprm5ibyywwk7qq3q0q0tyzni9c9yf4ibod4j1887bcz7t6iq63eefah0qoz19b1x8cmuvjof2f08c4d6rtb81oyaid0bym3ix3beyvd12qlvezwjfi0wf2hpvrm5419r9wetk28smcosy2xdqcv1dsoqxy1x4t0u9peuas3e2dzvp7n7prlryqs0le9xqu3tf2mwphyywul1zti21dqjxcqpl55fzi6zyaky0gbkaak66tmbu3r8f4075pq8ph1mm3h7ee5ijq69pd9aleqqi6qb42yd9jsytmw5er4gt89hwnuizyu0y0xy1267n50qvhqynz2eimsdsprh6b3yh2qevcizptwbkvymkah5ygauza5wramkbpt9bfut4y7hpnvs6bnzn33rzkaqdnzq8nrvtvor35ab3i3idfet6gsq5r2ulxqke9exlmqhp6i97r22849o4bvui8zy == \n\g\h\9\e\l\h\7\7\q\l\z\9\r\e\e\d\w\f\m\u\4\3\y\n\3\m\9\k\b\x\2\a\3\a\l\y\7\t\3\t\8\t\p\t\f\t\m\p\r\m\5\i\b\y\y\w\w\k\7\q\q\3\q\0\q\0\t\y\z\n\i\9\c\9\y\f\4\i\b\o\d\4\j\1\8\8\7\b\c\z\7\t\6\i\q\6\3\e\e\f\a\h\0\q\o\z\1\9\b\1\x\8\c\m\u\v\j\o\f\2\f\0\8\c\4\d\6\r\t\b\8\1\o\y\a\i\d\0\b\y\m\3\i\x\3\b\e\y\v\d\1\2\q\l\v\e\z\w\j\f\i\0\w\f\2\h\p\v\r\m\5\4\1\9\r\9\w\e\t\k\2\8\s\m\c\o\s\y\2\x\d\q\c\v\1\d\s\o\q\x\y\1\x\4\t\0\u\9\p\e\u\a\s\3\e\2\d\z\v\p\7\n\7\p\r\l\r\y\q\s\0\l\e\9\x\q\u\3\t\f\2\m\w\p\h\y\y\w\u\l\1\z\t\i\2\1\d\q\j\x\c\q\p\l\5\5\f\z\i\6\z\y\a\k\y\0\g\b\k\a\a\k\6\6\t\m\b\u\3\r\8\f\4\0\7\5\p\q\8\p\h\1\m\m\3\h\7\e\e\5\i\j\q\6\9\p\d\9\a\l\e\q\q\i\6\q\b\4\2\y\d\9\j\s\y\t\m\w\5\e\r\4\g\t\8\9\h\w\n\u\i\z\y\u\0\y\0\x\y\1\2\6\7\n\5\0\q\v\h\q\y\n\z\2\e\i\m\s\d\s\p\r\h\6\b\3\y\h\2\q\e\v\c\i\z\p\t\w\b\k\v\y\m\k\a\h\5\y\g\a\u\z\a\5\w\r\a\m\k\b\p\t\9\b\f\u\t\4\y\7\h\p\n\v\s\6\b\n\z\n\3\3\r\z\k\a\q\d\n\z\q\8\n\r\v\t\v\o\r\3\5\a\b\3\i\3\i\d\f\e\t\6\g\s\q\5\r\2\u\l\x\q\k\e\9\e\x\l\m\q\h\p\6\i\9\7\r\2\2\8\4\9\o\4\b\v\u\i\8\z\y ]] 00:08:49.654 14:59:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.654 14:59:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:49.654 [2024-11-20 14:59:20.244946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:49.654 [2024-11-20 14:59:20.245042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70393 ] 00:08:49.654 [2024-11-20 14:59:20.376273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.654 [2024-11-20 14:59:20.410429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.654  [2024-11-20T14:59:20.718Z] Copying: 512/512 [B] (average 500 kBps) 00:08:49.914 00:08:49.914 14:59:20 -- dd/posix.sh@93 -- # [[ ngh9elh77qlz9reedwfmu43yn3m9kbx2a3aly7t3t8tptftmprm5ibyywwk7qq3q0q0tyzni9c9yf4ibod4j1887bcz7t6iq63eefah0qoz19b1x8cmuvjof2f08c4d6rtb81oyaid0bym3ix3beyvd12qlvezwjfi0wf2hpvrm5419r9wetk28smcosy2xdqcv1dsoqxy1x4t0u9peuas3e2dzvp7n7prlryqs0le9xqu3tf2mwphyywul1zti21dqjxcqpl55fzi6zyaky0gbkaak66tmbu3r8f4075pq8ph1mm3h7ee5ijq69pd9aleqqi6qb42yd9jsytmw5er4gt89hwnuizyu0y0xy1267n50qvhqynz2eimsdsprh6b3yh2qevcizptwbkvymkah5ygauza5wramkbpt9bfut4y7hpnvs6bnzn33rzkaqdnzq8nrvtvor35ab3i3idfet6gsq5r2ulxqke9exlmqhp6i97r22849o4bvui8zy == \n\g\h\9\e\l\h\7\7\q\l\z\9\r\e\e\d\w\f\m\u\4\3\y\n\3\m\9\k\b\x\2\a\3\a\l\y\7\t\3\t\8\t\p\t\f\t\m\p\r\m\5\i\b\y\y\w\w\k\7\q\q\3\q\0\q\0\t\y\z\n\i\9\c\9\y\f\4\i\b\o\d\4\j\1\8\8\7\b\c\z\7\t\6\i\q\6\3\e\e\f\a\h\0\q\o\z\1\9\b\1\x\8\c\m\u\v\j\o\f\2\f\0\8\c\4\d\6\r\t\b\8\1\o\y\a\i\d\0\b\y\m\3\i\x\3\b\e\y\v\d\1\2\q\l\v\e\z\w\j\f\i\0\w\f\2\h\p\v\r\m\5\4\1\9\r\9\w\e\t\k\2\8\s\m\c\o\s\y\2\x\d\q\c\v\1\d\s\o\q\x\y\1\x\4\t\0\u\9\p\e\u\a\s\3\e\2\d\z\v\p\7\n\7\p\r\l\r\y\q\s\0\l\e\9\x\q\u\3\t\f\2\m\w\p\h\y\y\w\u\l\1\z\t\i\2\1\d\q\j\x\c\q\p\l\5\5\f\z\i\6\z\y\a\k\y\0\g\b\k\a\a\k\6\6\t\m\b\u\3\r\8\f\4\0\7\5\p\q\8\p\h\1\m\m\3\h\7\e\e\5\i\j\q\6\9\p\d\9\a\l\e\q\q\i\6\q\b\4\2\y\d\9\j\s\y\t\m\w\5\e\r\4\g\t\8\9\h\w\n\u\i\z\y\u\0\y\0\x\y\1\2\6\7\n\5\0\q\v\h\q\y\n\z\2\e\i\m\s\d\s\p\r\h\6\b\3\y\h\2\q\e\v\c\i\z\p\t\w\b\k\v\y\m\k\a\h\5\y\g\a\u\z\a\5\w\r\a\m\k\b\p\t\9\b\f\u\t\4\y\7\h\p\n\v\s\6\b\n\z\n\3\3\r\z\k\a\q\d\n\z\q\8\n\r\v\t\v\o\r\3\5\a\b\3\i\3\i\d\f\e\t\6\g\s\q\5\r\2\u\l\x\q\k\e\9\e\x\l\m\q\h\p\6\i\9\7\r\2\2\8\4\9\o\4\b\v\u\i\8\z\y ]] 00:08:49.914 14:59:20 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:49.914 14:59:20 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:49.914 [2024-11-20 14:59:20.639966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:49.914 [2024-11-20 14:59:20.640212] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70401 ] 00:08:50.172 [2024-11-20 14:59:20.775929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.172 [2024-11-20 14:59:20.809495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.172  [2024-11-20T14:59:21.234Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.430 00:08:50.430 14:59:21 -- dd/posix.sh@93 -- # [[ ngh9elh77qlz9reedwfmu43yn3m9kbx2a3aly7t3t8tptftmprm5ibyywwk7qq3q0q0tyzni9c9yf4ibod4j1887bcz7t6iq63eefah0qoz19b1x8cmuvjof2f08c4d6rtb81oyaid0bym3ix3beyvd12qlvezwjfi0wf2hpvrm5419r9wetk28smcosy2xdqcv1dsoqxy1x4t0u9peuas3e2dzvp7n7prlryqs0le9xqu3tf2mwphyywul1zti21dqjxcqpl55fzi6zyaky0gbkaak66tmbu3r8f4075pq8ph1mm3h7ee5ijq69pd9aleqqi6qb42yd9jsytmw5er4gt89hwnuizyu0y0xy1267n50qvhqynz2eimsdsprh6b3yh2qevcizptwbkvymkah5ygauza5wramkbpt9bfut4y7hpnvs6bnzn33rzkaqdnzq8nrvtvor35ab3i3idfet6gsq5r2ulxqke9exlmqhp6i97r22849o4bvui8zy == \n\g\h\9\e\l\h\7\7\q\l\z\9\r\e\e\d\w\f\m\u\4\3\y\n\3\m\9\k\b\x\2\a\3\a\l\y\7\t\3\t\8\t\p\t\f\t\m\p\r\m\5\i\b\y\y\w\w\k\7\q\q\3\q\0\q\0\t\y\z\n\i\9\c\9\y\f\4\i\b\o\d\4\j\1\8\8\7\b\c\z\7\t\6\i\q\6\3\e\e\f\a\h\0\q\o\z\1\9\b\1\x\8\c\m\u\v\j\o\f\2\f\0\8\c\4\d\6\r\t\b\8\1\o\y\a\i\d\0\b\y\m\3\i\x\3\b\e\y\v\d\1\2\q\l\v\e\z\w\j\f\i\0\w\f\2\h\p\v\r\m\5\4\1\9\r\9\w\e\t\k\2\8\s\m\c\o\s\y\2\x\d\q\c\v\1\d\s\o\q\x\y\1\x\4\t\0\u\9\p\e\u\a\s\3\e\2\d\z\v\p\7\n\7\p\r\l\r\y\q\s\0\l\e\9\x\q\u\3\t\f\2\m\w\p\h\y\y\w\u\l\1\z\t\i\2\1\d\q\j\x\c\q\p\l\5\5\f\z\i\6\z\y\a\k\y\0\g\b\k\a\a\k\6\6\t\m\b\u\3\r\8\f\4\0\7\5\p\q\8\p\h\1\m\m\3\h\7\e\e\5\i\j\q\6\9\p\d\9\a\l\e\q\q\i\6\q\b\4\2\y\d\9\j\s\y\t\m\w\5\e\r\4\g\t\8\9\h\w\n\u\i\z\y\u\0\y\0\x\y\1\2\6\7\n\5\0\q\v\h\q\y\n\z\2\e\i\m\s\d\s\p\r\h\6\b\3\y\h\2\q\e\v\c\i\z\p\t\w\b\k\v\y\m\k\a\h\5\y\g\a\u\z\a\5\w\r\a\m\k\b\p\t\9\b\f\u\t\4\y\7\h\p\n\v\s\6\b\n\z\n\3\3\r\z\k\a\q\d\n\z\q\8\n\r\v\t\v\o\r\3\5\a\b\3\i\3\i\d\f\e\t\6\g\s\q\5\r\2\u\l\x\q\k\e\9\e\x\l\m\q\h\p\6\i\9\7\r\2\2\8\4\9\o\4\b\v\u\i\8\z\y ]] 00:08:50.430 14:59:21 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.430 14:59:21 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:50.430 [2024-11-20 14:59:21.049429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:50.430 [2024-11-20 14:59:21.049540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:08:50.430 [2024-11-20 14:59:21.186831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.430 [2024-11-20 14:59:21.227390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.688  [2024-11-20T14:59:21.492Z] Copying: 512/512 [B] (average 500 kBps) 00:08:50.688 00:08:50.689 14:59:21 -- dd/posix.sh@93 -- # [[ ngh9elh77qlz9reedwfmu43yn3m9kbx2a3aly7t3t8tptftmprm5ibyywwk7qq3q0q0tyzni9c9yf4ibod4j1887bcz7t6iq63eefah0qoz19b1x8cmuvjof2f08c4d6rtb81oyaid0bym3ix3beyvd12qlvezwjfi0wf2hpvrm5419r9wetk28smcosy2xdqcv1dsoqxy1x4t0u9peuas3e2dzvp7n7prlryqs0le9xqu3tf2mwphyywul1zti21dqjxcqpl55fzi6zyaky0gbkaak66tmbu3r8f4075pq8ph1mm3h7ee5ijq69pd9aleqqi6qb42yd9jsytmw5er4gt89hwnuizyu0y0xy1267n50qvhqynz2eimsdsprh6b3yh2qevcizptwbkvymkah5ygauza5wramkbpt9bfut4y7hpnvs6bnzn33rzkaqdnzq8nrvtvor35ab3i3idfet6gsq5r2ulxqke9exlmqhp6i97r22849o4bvui8zy == \n\g\h\9\e\l\h\7\7\q\l\z\9\r\e\e\d\w\f\m\u\4\3\y\n\3\m\9\k\b\x\2\a\3\a\l\y\7\t\3\t\8\t\p\t\f\t\m\p\r\m\5\i\b\y\y\w\w\k\7\q\q\3\q\0\q\0\t\y\z\n\i\9\c\9\y\f\4\i\b\o\d\4\j\1\8\8\7\b\c\z\7\t\6\i\q\6\3\e\e\f\a\h\0\q\o\z\1\9\b\1\x\8\c\m\u\v\j\o\f\2\f\0\8\c\4\d\6\r\t\b\8\1\o\y\a\i\d\0\b\y\m\3\i\x\3\b\e\y\v\d\1\2\q\l\v\e\z\w\j\f\i\0\w\f\2\h\p\v\r\m\5\4\1\9\r\9\w\e\t\k\2\8\s\m\c\o\s\y\2\x\d\q\c\v\1\d\s\o\q\x\y\1\x\4\t\0\u\9\p\e\u\a\s\3\e\2\d\z\v\p\7\n\7\p\r\l\r\y\q\s\0\l\e\9\x\q\u\3\t\f\2\m\w\p\h\y\y\w\u\l\1\z\t\i\2\1\d\q\j\x\c\q\p\l\5\5\f\z\i\6\z\y\a\k\y\0\g\b\k\a\a\k\6\6\t\m\b\u\3\r\8\f\4\0\7\5\p\q\8\p\h\1\m\m\3\h\7\e\e\5\i\j\q\6\9\p\d\9\a\l\e\q\q\i\6\q\b\4\2\y\d\9\j\s\y\t\m\w\5\e\r\4\g\t\8\9\h\w\n\u\i\z\y\u\0\y\0\x\y\1\2\6\7\n\5\0\q\v\h\q\y\n\z\2\e\i\m\s\d\s\p\r\h\6\b\3\y\h\2\q\e\v\c\i\z\p\t\w\b\k\v\y\m\k\a\h\5\y\g\a\u\z\a\5\w\r\a\m\k\b\p\t\9\b\f\u\t\4\y\7\h\p\n\v\s\6\b\n\z\n\3\3\r\z\k\a\q\d\n\z\q\8\n\r\v\t\v\o\r\3\5\a\b\3\i\3\i\d\f\e\t\6\g\s\q\5\r\2\u\l\x\q\k\e\9\e\x\l\m\q\h\p\6\i\9\7\r\2\2\8\4\9\o\4\b\v\u\i\8\z\y ]] 00:08:50.689 00:08:50.689 real 0m3.359s 00:08:50.689 user 0m1.632s 00:08:50.689 sys 0m0.748s 00:08:50.689 14:59:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.689 ************************************ 00:08:50.689 END TEST dd_flags_misc_forced_aio 00:08:50.689 ************************************ 00:08:50.689 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.689 14:59:21 -- dd/posix.sh@1 -- # cleanup 00:08:50.689 14:59:21 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:50.689 14:59:21 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:50.689 ************************************ 00:08:50.689 END TEST spdk_dd_posix 00:08:50.689 ************************************ 00:08:50.689 00:08:50.689 real 0m16.384s 00:08:50.689 user 0m7.011s 00:08:50.689 sys 0m3.543s 00:08:50.689 14:59:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:50.689 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.947 14:59:21 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:50.947 14:59:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:08:50.947 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.947 ************************************ 00:08:50.947 START TEST spdk_dd_malloc 00:08:50.947 ************************************ 00:08:50.947 14:59:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:50.947 * Looking for test storage... 00:08:50.947 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:50.947 14:59:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:50.947 14:59:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:50.947 14:59:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:50.947 14:59:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:50.947 14:59:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:50.947 14:59:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:50.947 14:59:21 -- scripts/common.sh@335 -- # IFS=.-: 00:08:50.947 14:59:21 -- scripts/common.sh@335 -- # read -ra ver1 00:08:50.947 14:59:21 -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.947 14:59:21 -- scripts/common.sh@336 -- # read -ra ver2 00:08:50.947 14:59:21 -- scripts/common.sh@337 -- # local 'op=<' 00:08:50.947 14:59:21 -- scripts/common.sh@339 -- # ver1_l=2 00:08:50.947 14:59:21 -- scripts/common.sh@340 -- # ver2_l=1 00:08:50.947 14:59:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:50.947 14:59:21 -- scripts/common.sh@343 -- # case "$op" in 00:08:50.947 14:59:21 -- scripts/common.sh@344 -- # : 1 00:08:50.947 14:59:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:50.947 14:59:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.947 14:59:21 -- scripts/common.sh@364 -- # decimal 1 00:08:50.947 14:59:21 -- scripts/common.sh@352 -- # local d=1 00:08:50.947 14:59:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.947 14:59:21 -- scripts/common.sh@354 -- # echo 1 00:08:50.947 14:59:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:50.947 14:59:21 -- scripts/common.sh@365 -- # decimal 2 00:08:50.947 14:59:21 -- scripts/common.sh@352 -- # local d=2 00:08:50.947 14:59:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.947 14:59:21 -- scripts/common.sh@354 -- # echo 2 00:08:50.947 14:59:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:50.947 14:59:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:50.947 14:59:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:50.947 14:59:21 -- scripts/common.sh@367 -- # return 0 00:08:50.947 14:59:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.947 --rc genhtml_branch_coverage=1 00:08:50.947 --rc genhtml_function_coverage=1 00:08:50.947 --rc genhtml_legend=1 00:08:50.947 --rc geninfo_all_blocks=1 00:08:50.947 --rc geninfo_unexecuted_blocks=1 00:08:50.947 00:08:50.947 ' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.947 --rc genhtml_branch_coverage=1 00:08:50.947 --rc genhtml_function_coverage=1 00:08:50.947 --rc genhtml_legend=1 00:08:50.947 --rc geninfo_all_blocks=1 00:08:50.947 --rc geninfo_unexecuted_blocks=1 00:08:50.947 00:08:50.947 ' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:08:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.947 --rc genhtml_branch_coverage=1 00:08:50.947 --rc genhtml_function_coverage=1 00:08:50.947 --rc genhtml_legend=1 00:08:50.947 --rc geninfo_all_blocks=1 00:08:50.947 --rc geninfo_unexecuted_blocks=1 00:08:50.947 00:08:50.947 ' 00:08:50.947 14:59:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:50.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.947 --rc genhtml_branch_coverage=1 00:08:50.947 --rc genhtml_function_coverage=1 00:08:50.947 --rc genhtml_legend=1 00:08:50.947 --rc geninfo_all_blocks=1 00:08:50.947 --rc geninfo_unexecuted_blocks=1 00:08:50.947 00:08:50.947 ' 00:08:50.947 14:59:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.947 14:59:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.947 14:59:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.947 14:59:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.947 14:59:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.948 14:59:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.948 14:59:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.948 14:59:21 -- paths/export.sh@5 -- # export PATH 00:08:50.948 14:59:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.948 14:59:21 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:50.948 14:59:21 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.948 14:59:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.948 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:08:50.948 ************************************ 00:08:50.948 START TEST dd_malloc_copy 00:08:50.948 ************************************ 00:08:50.948 14:59:21 -- common/autotest_common.sh@1114 -- # malloc_copy 00:08:50.948 14:59:21 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:50.948 14:59:21 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:50.948 14:59:21 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.948 14:59:21 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:50.948 14:59:21 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:50.948 14:59:21 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:50.948 14:59:21 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:50.948 14:59:21 -- dd/malloc.sh@28 -- # gen_conf 00:08:50.948 14:59:21 -- dd/common.sh@31 -- # xtrace_disable 00:08:50.948 14:59:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.232 [2024-11-20 14:59:21.785132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:51.232 [2024-11-20 14:59:21.785442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:08:51.232 { 00:08:51.232 "subsystems": [ 00:08:51.232 { 00:08:51.233 "subsystem": "bdev", 00:08:51.233 "config": [ 00:08:51.233 { 00:08:51.233 "params": { 00:08:51.233 "block_size": 512, 00:08:51.233 "num_blocks": 1048576, 00:08:51.233 "name": "malloc0" 00:08:51.233 }, 00:08:51.233 "method": "bdev_malloc_create" 00:08:51.233 }, 00:08:51.233 { 00:08:51.233 "params": { 00:08:51.233 "block_size": 512, 00:08:51.233 "num_blocks": 1048576, 00:08:51.233 "name": "malloc1" 00:08:51.233 }, 00:08:51.233 "method": "bdev_malloc_create" 00:08:51.233 }, 00:08:51.233 { 00:08:51.233 "method": "bdev_wait_for_examine" 00:08:51.233 } 00:08:51.233 ] 00:08:51.233 } 00:08:51.233 ] 00:08:51.233 } 00:08:51.233 [2024-11-20 14:59:21.924547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.233 [2024-11-20 14:59:21.959035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.608  [2024-11-20T14:59:24.349Z] Copying: 200/512 [MB] (200 MBps) [2024-11-20T14:59:24.917Z] Copying: 401/512 [MB] (200 MBps) [2024-11-20T14:59:25.176Z] Copying: 512/512 [MB] (average 200 MBps) 00:08:54.372 00:08:54.372 14:59:25 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:54.372 14:59:25 -- dd/malloc.sh@33 -- # gen_conf 00:08:54.372 14:59:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:54.372 14:59:25 -- common/autotest_common.sh@10 -- # set +x 00:08:54.372 [2024-11-20 14:59:25.106936] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:54.372 [2024-11-20 14:59:25.107079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70526 ] 00:08:54.372 { 00:08:54.372 "subsystems": [ 00:08:54.372 { 00:08:54.372 "subsystem": "bdev", 00:08:54.372 "config": [ 00:08:54.372 { 00:08:54.372 "params": { 00:08:54.372 "block_size": 512, 00:08:54.372 "num_blocks": 1048576, 00:08:54.372 "name": "malloc0" 00:08:54.372 }, 00:08:54.372 "method": "bdev_malloc_create" 00:08:54.372 }, 00:08:54.372 { 00:08:54.372 "params": { 00:08:54.372 "block_size": 512, 00:08:54.372 "num_blocks": 1048576, 00:08:54.372 "name": "malloc1" 00:08:54.372 }, 00:08:54.372 "method": "bdev_malloc_create" 00:08:54.372 }, 00:08:54.372 { 00:08:54.372 "method": "bdev_wait_for_examine" 00:08:54.372 } 00:08:54.372 ] 00:08:54.372 } 00:08:54.372 ] 00:08:54.372 } 00:08:54.630 [2024-11-20 14:59:25.253860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.630 [2024-11-20 14:59:25.287917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.028  [2024-11-20T14:59:27.767Z] Copying: 195/512 [MB] (195 MBps) [2024-11-20T14:59:28.333Z] Copying: 394/512 [MB] (198 MBps) [2024-11-20T14:59:28.591Z] Copying: 512/512 [MB] (average 196 MBps) 00:08:57.787 00:08:57.787 00:08:57.787 real 0m6.699s 00:08:57.787 user 0m6.058s 00:08:57.787 sys 0m0.502s 00:08:57.787 14:59:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.787 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.787 ************************************ 00:08:57.787 END TEST dd_malloc_copy 00:08:57.787 ************************************ 00:08:57.787 ************************************ 00:08:57.787 END TEST spdk_dd_malloc 00:08:57.787 ************************************ 00:08:57.787 00:08:57.787 real 0m6.939s 00:08:57.787 user 0m6.206s 00:08:57.787 sys 0m0.597s 00:08:57.787 14:59:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:57.787 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.787 14:59:28 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:57.787 14:59:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:57.787 14:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.787 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.787 ************************************ 00:08:57.787 START TEST spdk_dd_bdev_to_bdev 00:08:57.787 ************************************ 00:08:57.787 14:59:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:57.787 * Looking for test storage... 
00:08:57.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:57.787 14:59:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:57.787 14:59:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:57.787 14:59:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:58.047 14:59:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:58.047 14:59:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:58.047 14:59:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:58.047 14:59:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:58.047 14:59:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:58.047 14:59:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:58.047 14:59:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.047 14:59:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:58.047 14:59:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:58.047 14:59:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:58.047 14:59:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:58.047 14:59:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:58.047 14:59:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:58.047 14:59:28 -- scripts/common.sh@344 -- # : 1 00:08:58.047 14:59:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:58.047 14:59:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.047 14:59:28 -- scripts/common.sh@364 -- # decimal 1 00:08:58.047 14:59:28 -- scripts/common.sh@352 -- # local d=1 00:08:58.047 14:59:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.047 14:59:28 -- scripts/common.sh@354 -- # echo 1 00:08:58.047 14:59:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:58.047 14:59:28 -- scripts/common.sh@365 -- # decimal 2 00:08:58.047 14:59:28 -- scripts/common.sh@352 -- # local d=2 00:08:58.047 14:59:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.047 14:59:28 -- scripts/common.sh@354 -- # echo 2 00:08:58.047 14:59:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:58.047 14:59:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:58.047 14:59:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:58.047 14:59:28 -- scripts/common.sh@367 -- # return 0 00:08:58.047 14:59:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.047 14:59:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:58.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.047 --rc genhtml_branch_coverage=1 00:08:58.047 --rc genhtml_function_coverage=1 00:08:58.047 --rc genhtml_legend=1 00:08:58.047 --rc geninfo_all_blocks=1 00:08:58.047 --rc geninfo_unexecuted_blocks=1 00:08:58.047 00:08:58.047 ' 00:08:58.047 14:59:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:58.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.047 --rc genhtml_branch_coverage=1 00:08:58.047 --rc genhtml_function_coverage=1 00:08:58.047 --rc genhtml_legend=1 00:08:58.047 --rc geninfo_all_blocks=1 00:08:58.047 --rc geninfo_unexecuted_blocks=1 00:08:58.047 00:08:58.047 ' 00:08:58.047 14:59:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:58.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.047 --rc genhtml_branch_coverage=1 00:08:58.047 --rc genhtml_function_coverage=1 00:08:58.047 --rc genhtml_legend=1 00:08:58.047 --rc geninfo_all_blocks=1 00:08:58.047 --rc geninfo_unexecuted_blocks=1 00:08:58.047 00:08:58.047 ' 00:08:58.047 14:59:28 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:58.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.047 --rc genhtml_branch_coverage=1 00:08:58.047 --rc genhtml_function_coverage=1 00:08:58.047 --rc genhtml_legend=1 00:08:58.047 --rc geninfo_all_blocks=1 00:08:58.047 --rc geninfo_unexecuted_blocks=1 00:08:58.047 00:08:58.047 ' 00:08:58.047 14:59:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.047 14:59:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.047 14:59:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.047 14:59:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.047 14:59:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.047 14:59:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.047 14:59:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.047 14:59:28 -- paths/export.sh@5 -- # export PATH 00:08:58.047 14:59:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@52 -- # 
nvme0_pci=0000:00:06.0 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:58.047 14:59:28 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:58.048 14:59:28 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:58.048 14:59:28 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:58.048 14:59:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.048 14:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:58.048 ************************************ 00:08:58.048 START TEST dd_inflate_file 00:08:58.048 ************************************ 00:08:58.048 14:59:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:58.048 [2024-11-20 14:59:28.742839] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:58.048 [2024-11-20 14:59:28.743762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70636 ] 00:08:58.306 [2024-11-20 14:59:28.885684] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.306 [2024-11-20 14:59:28.922630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.306  [2024-11-20T14:59:29.369Z] Copying: 64/64 [MB] (average 1828 MBps) 00:08:58.565 00:08:58.565 00:08:58.565 real 0m0.471s 00:08:58.565 user 0m0.219s 00:08:58.565 sys 0m0.134s 00:08:58.565 14:59:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:58.565 ************************************ 00:08:58.565 END TEST dd_inflate_file 00:08:58.565 ************************************ 00:08:58.565 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 14:59:29 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:58.565 14:59:29 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:58.565 14:59:29 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:58.565 14:59:29 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:58.565 14:59:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:58.565 14:59:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:58.565 14:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:58.565 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 14:59:29 -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 ************************************ 00:08:58.565 START TEST dd_copy_to_out_bdev 00:08:58.565 ************************************ 00:08:58.565 14:59:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:58.565 { 00:08:58.565 "subsystems": [ 00:08:58.565 { 00:08:58.565 "subsystem": "bdev", 00:08:58.565 "config": [ 00:08:58.565 { 00:08:58.565 "params": { 00:08:58.565 "trtype": "pcie", 00:08:58.565 "traddr": "0000:00:06.0", 00:08:58.565 "name": "Nvme0" 00:08:58.565 }, 00:08:58.565 "method": "bdev_nvme_attach_controller" 00:08:58.565 }, 00:08:58.565 { 00:08:58.566 "params": { 00:08:58.566 "trtype": "pcie", 00:08:58.566 "traddr": "0000:00:07.0", 00:08:58.566 "name": "Nvme1" 00:08:58.566 }, 00:08:58.566 "method": "bdev_nvme_attach_controller" 00:08:58.566 }, 00:08:58.566 { 00:08:58.566 "method": "bdev_wait_for_examine" 00:08:58.566 } 00:08:58.566 ] 00:08:58.566 } 00:08:58.566 ] 00:08:58.566 } 00:08:58.566 [2024-11-20 14:59:29.267184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:58.566 [2024-11-20 14:59:29.267480] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70669 ] 00:08:58.824 [2024-11-20 14:59:29.405470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.824 [2024-11-20 14:59:29.444789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.201  [2024-11-20T14:59:31.005Z] Copying: 62/64 [MB] (62 MBps) [2024-11-20T14:59:31.005Z] Copying: 64/64 [MB] (average 62 MBps) 00:09:00.201 00:09:00.201 ************************************ 00:09:00.201 END TEST dd_copy_to_out_bdev 00:09:00.201 ************************************ 00:09:00.201 00:09:00.201 real 0m1.644s 00:09:00.201 user 0m1.405s 00:09:00.201 sys 0m0.168s 00:09:00.201 14:59:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:00.201 14:59:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:00.201 14:59:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:00.201 14:59:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.201 14:59:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.201 ************************************ 00:09:00.201 START TEST dd_offset_magic 00:09:00.201 ************************************ 00:09:00.201 14:59:30 -- common/autotest_common.sh@1114 -- # offset_magic 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:00.201 14:59:30 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:00.201 14:59:30 -- dd/common.sh@31 -- # xtrace_disable 00:09:00.201 14:59:30 -- common/autotest_common.sh@10 -- # set +x 00:09:00.201 [2024-11-20 14:59:30.950032] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:00.201 [2024-11-20 14:59:30.950837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70713 ] 00:09:00.201 { 00:09:00.201 "subsystems": [ 00:09:00.201 { 00:09:00.201 "subsystem": "bdev", 00:09:00.201 "config": [ 00:09:00.201 { 00:09:00.201 "params": { 00:09:00.201 "trtype": "pcie", 00:09:00.201 "traddr": "0000:00:06.0", 00:09:00.201 "name": "Nvme0" 00:09:00.201 }, 00:09:00.201 "method": "bdev_nvme_attach_controller" 00:09:00.201 }, 00:09:00.201 { 00:09:00.201 "params": { 00:09:00.201 "trtype": "pcie", 00:09:00.201 "traddr": "0000:00:07.0", 00:09:00.201 "name": "Nvme1" 00:09:00.201 }, 00:09:00.201 "method": "bdev_nvme_attach_controller" 00:09:00.201 }, 00:09:00.201 { 00:09:00.201 "method": "bdev_wait_for_examine" 00:09:00.201 } 00:09:00.201 ] 00:09:00.201 } 00:09:00.201 ] 00:09:00.201 } 00:09:00.460 [2024-11-20 14:59:31.095481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.460 [2024-11-20 14:59:31.129792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.720  [2024-11-20T14:59:31.783Z] Copying: 65/65 [MB] (average 1274 MBps) 00:09:00.979 00:09:00.979 14:59:31 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:00.979 14:59:31 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:00.979 14:59:31 -- dd/common.sh@31 -- # xtrace_disable 00:09:00.979 14:59:31 -- common/autotest_common.sh@10 -- # set +x 00:09:00.979 [2024-11-20 14:59:31.590855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:00.979 [2024-11-20 14:59:31.591613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70722 ] 00:09:00.979 { 00:09:00.979 "subsystems": [ 00:09:00.979 { 00:09:00.979 "subsystem": "bdev", 00:09:00.979 "config": [ 00:09:00.979 { 00:09:00.979 "params": { 00:09:00.979 "trtype": "pcie", 00:09:00.979 "traddr": "0000:00:06.0", 00:09:00.979 "name": "Nvme0" 00:09:00.979 }, 00:09:00.979 "method": "bdev_nvme_attach_controller" 00:09:00.979 }, 00:09:00.979 { 00:09:00.979 "params": { 00:09:00.979 "trtype": "pcie", 00:09:00.979 "traddr": "0000:00:07.0", 00:09:00.979 "name": "Nvme1" 00:09:00.979 }, 00:09:00.979 "method": "bdev_nvme_attach_controller" 00:09:00.979 }, 00:09:00.979 { 00:09:00.979 "method": "bdev_wait_for_examine" 00:09:00.979 } 00:09:00.979 ] 00:09:00.979 } 00:09:00.979 ] 00:09:00.979 } 00:09:00.979 [2024-11-20 14:59:31.728303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.979 [2024-11-20 14:59:31.777366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.238  [2024-11-20T14:59:32.300Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:01.496 00:09:01.496 14:59:32 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:01.496 14:59:32 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:01.496 14:59:32 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:01.497 14:59:32 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:01.497 14:59:32 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:01.497 14:59:32 -- dd/common.sh@31 -- # xtrace_disable 00:09:01.497 14:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.497 [2024-11-20 14:59:32.224596] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:01.497 [2024-11-20 14:59:32.224742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:09:01.497 { 00:09:01.497 "subsystems": [ 00:09:01.497 { 00:09:01.497 "subsystem": "bdev", 00:09:01.497 "config": [ 00:09:01.497 { 00:09:01.497 "params": { 00:09:01.497 "trtype": "pcie", 00:09:01.497 "traddr": "0000:00:06.0", 00:09:01.497 "name": "Nvme0" 00:09:01.497 }, 00:09:01.497 "method": "bdev_nvme_attach_controller" 00:09:01.497 }, 00:09:01.497 { 00:09:01.497 "params": { 00:09:01.497 "trtype": "pcie", 00:09:01.497 "traddr": "0000:00:07.0", 00:09:01.497 "name": "Nvme1" 00:09:01.497 }, 00:09:01.497 "method": "bdev_nvme_attach_controller" 00:09:01.497 }, 00:09:01.497 { 00:09:01.497 "method": "bdev_wait_for_examine" 00:09:01.497 } 00:09:01.497 ] 00:09:01.497 } 00:09:01.497 ] 00:09:01.497 } 00:09:01.754 [2024-11-20 14:59:32.365517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.754 [2024-11-20 14:59:32.405136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.012  [2024-11-20T14:59:32.816Z] Copying: 65/65 [MB] (average 1477 MBps) 00:09:02.012 00:09:02.271 14:59:32 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:02.271 14:59:32 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:02.271 14:59:32 -- dd/common.sh@31 -- # xtrace_disable 00:09:02.271 14:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.271 [2024-11-20 14:59:32.869374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:02.271 [2024-11-20 14:59:32.869900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70757 ] 00:09:02.271 { 00:09:02.271 "subsystems": [ 00:09:02.271 { 00:09:02.271 "subsystem": "bdev", 00:09:02.271 "config": [ 00:09:02.271 { 00:09:02.271 "params": { 00:09:02.271 "trtype": "pcie", 00:09:02.271 "traddr": "0000:00:06.0", 00:09:02.271 "name": "Nvme0" 00:09:02.271 }, 00:09:02.271 "method": "bdev_nvme_attach_controller" 00:09:02.271 }, 00:09:02.271 { 00:09:02.271 "params": { 00:09:02.271 "trtype": "pcie", 00:09:02.271 "traddr": "0000:00:07.0", 00:09:02.271 "name": "Nvme1" 00:09:02.271 }, 00:09:02.271 "method": "bdev_nvme_attach_controller" 00:09:02.271 }, 00:09:02.271 { 00:09:02.271 "method": "bdev_wait_for_examine" 00:09:02.271 } 00:09:02.271 ] 00:09:02.271 } 00:09:02.271 ] 00:09:02.271 } 00:09:02.271 [2024-11-20 14:59:33.007896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.271 [2024-11-20 14:59:33.055769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.529  [2024-11-20T14:59:33.604Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:02.800 00:09:02.800 ************************************ 00:09:02.800 END TEST dd_offset_magic 00:09:02.800 ************************************ 00:09:02.800 14:59:33 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:02.800 14:59:33 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:02.800 00:09:02.800 real 0m2.515s 00:09:02.800 user 0m1.763s 00:09:02.800 sys 0m0.548s 00:09:02.800 14:59:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.800 14:59:33 -- common/autotest_common.sh@10 -- # set +x 00:09:02.800 14:59:33 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:02.800 14:59:33 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:02.800 14:59:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:02.800 14:59:33 -- dd/common.sh@11 -- # local nvme_ref= 00:09:02.800 14:59:33 -- dd/common.sh@12 -- # local size=4194330 00:09:02.800 14:59:33 -- dd/common.sh@14 -- # local bs=1048576 00:09:02.800 14:59:33 -- dd/common.sh@15 -- # local count=5 00:09:02.800 14:59:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:02.800 14:59:33 -- dd/common.sh@18 -- # gen_conf 00:09:02.800 14:59:33 -- dd/common.sh@31 -- # xtrace_disable 00:09:02.800 14:59:33 -- common/autotest_common.sh@10 -- # set +x 00:09:02.800 [2024-11-20 14:59:33.494331] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
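Note on the cleanup that begins here: with the magic checks done, clear_nvme zero-fills the region the test touched on each namespace (size 4194330 bytes, rounded up to five 1 MiB blocks). The equivalent invocations, with the same placeholder config file as above:

spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /tmp/bdev.json
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /tmp/bdev.json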
00:09:02.800 [2024-11-20 14:59:33.494439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70786 ] 00:09:02.800 { 00:09:02.800 "subsystems": [ 00:09:02.800 { 00:09:02.800 "subsystem": "bdev", 00:09:02.800 "config": [ 00:09:02.800 { 00:09:02.800 "params": { 00:09:02.800 "trtype": "pcie", 00:09:02.800 "traddr": "0000:00:06.0", 00:09:02.800 "name": "Nvme0" 00:09:02.800 }, 00:09:02.800 "method": "bdev_nvme_attach_controller" 00:09:02.800 }, 00:09:02.800 { 00:09:02.800 "params": { 00:09:02.800 "trtype": "pcie", 00:09:02.800 "traddr": "0000:00:07.0", 00:09:02.800 "name": "Nvme1" 00:09:02.800 }, 00:09:02.800 "method": "bdev_nvme_attach_controller" 00:09:02.800 }, 00:09:02.800 { 00:09:02.800 "method": "bdev_wait_for_examine" 00:09:02.800 } 00:09:02.800 ] 00:09:02.800 } 00:09:02.800 ] 00:09:02.800 } 00:09:03.072 [2024-11-20 14:59:33.625955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.072 [2024-11-20 14:59:33.668775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.072  [2024-11-20T14:59:34.134Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:09:03.330 00:09:03.330 14:59:34 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:03.330 14:59:34 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:03.330 14:59:34 -- dd/common.sh@11 -- # local nvme_ref= 00:09:03.330 14:59:34 -- dd/common.sh@12 -- # local size=4194330 00:09:03.330 14:59:34 -- dd/common.sh@14 -- # local bs=1048576 00:09:03.330 14:59:34 -- dd/common.sh@15 -- # local count=5 00:09:03.330 14:59:34 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:03.330 14:59:34 -- dd/common.sh@18 -- # gen_conf 00:09:03.330 14:59:34 -- dd/common.sh@31 -- # xtrace_disable 00:09:03.330 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:03.330 [2024-11-20 14:59:34.058905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:03.330 [2024-11-20 14:59:34.059046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:09:03.330 { 00:09:03.330 "subsystems": [ 00:09:03.330 { 00:09:03.330 "subsystem": "bdev", 00:09:03.330 "config": [ 00:09:03.330 { 00:09:03.330 "params": { 00:09:03.330 "trtype": "pcie", 00:09:03.330 "traddr": "0000:00:06.0", 00:09:03.330 "name": "Nvme0" 00:09:03.330 }, 00:09:03.330 "method": "bdev_nvme_attach_controller" 00:09:03.330 }, 00:09:03.330 { 00:09:03.330 "params": { 00:09:03.330 "trtype": "pcie", 00:09:03.330 "traddr": "0000:00:07.0", 00:09:03.330 "name": "Nvme1" 00:09:03.330 }, 00:09:03.330 "method": "bdev_nvme_attach_controller" 00:09:03.330 }, 00:09:03.330 { 00:09:03.330 "method": "bdev_wait_for_examine" 00:09:03.330 } 00:09:03.330 ] 00:09:03.330 } 00:09:03.330 ] 00:09:03.330 } 00:09:03.588 [2024-11-20 14:59:34.194105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.588 [2024-11-20 14:59:34.240627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.846  [2024-11-20T14:59:34.650Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:03.846 00:09:03.846 14:59:34 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:03.846 ************************************ 00:09:03.846 END TEST spdk_dd_bdev_to_bdev 00:09:03.846 ************************************ 00:09:03.846 00:09:03.846 real 0m6.124s 00:09:03.846 user 0m4.376s 00:09:03.846 sys 0m1.254s 00:09:03.846 14:59:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:03.846 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.105 14:59:34 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:04.105 14:59:34 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:04.105 14:59:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.105 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.105 ************************************ 00:09:04.105 START TEST spdk_dd_uring 00:09:04.105 ************************************ 00:09:04.105 14:59:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:04.105 * Looking for test storage... 
00:09:04.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:04.105 14:59:34 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:04.105 14:59:34 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:04.105 14:59:34 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:04.105 14:59:34 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:04.105 14:59:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:04.105 14:59:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:04.105 14:59:34 -- scripts/common.sh@335 -- # IFS=.-: 00:09:04.105 14:59:34 -- scripts/common.sh@335 -- # read -ra ver1 00:09:04.105 14:59:34 -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.105 14:59:34 -- scripts/common.sh@336 -- # read -ra ver2 00:09:04.105 14:59:34 -- scripts/common.sh@337 -- # local 'op=<' 00:09:04.105 14:59:34 -- scripts/common.sh@339 -- # ver1_l=2 00:09:04.105 14:59:34 -- scripts/common.sh@340 -- # ver2_l=1 00:09:04.105 14:59:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:04.105 14:59:34 -- scripts/common.sh@343 -- # case "$op" in 00:09:04.105 14:59:34 -- scripts/common.sh@344 -- # : 1 00:09:04.105 14:59:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:04.105 14:59:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:04.105 14:59:34 -- scripts/common.sh@364 -- # decimal 1 00:09:04.105 14:59:34 -- scripts/common.sh@352 -- # local d=1 00:09:04.105 14:59:34 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.105 14:59:34 -- scripts/common.sh@354 -- # echo 1 00:09:04.105 14:59:34 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:04.105 14:59:34 -- scripts/common.sh@365 -- # decimal 2 00:09:04.105 14:59:34 -- scripts/common.sh@352 -- # local d=2 00:09:04.105 14:59:34 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.105 14:59:34 -- scripts/common.sh@354 -- # echo 2 00:09:04.105 14:59:34 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:04.105 14:59:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:04.105 14:59:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:04.105 14:59:34 -- scripts/common.sh@367 -- # return 0 00:09:04.105 14:59:34 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.105 --rc genhtml_branch_coverage=1 00:09:04.105 --rc genhtml_function_coverage=1 00:09:04.105 --rc genhtml_legend=1 00:09:04.105 --rc geninfo_all_blocks=1 00:09:04.105 --rc geninfo_unexecuted_blocks=1 00:09:04.105 00:09:04.105 ' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.105 --rc genhtml_branch_coverage=1 00:09:04.105 --rc genhtml_function_coverage=1 00:09:04.105 --rc genhtml_legend=1 00:09:04.105 --rc geninfo_all_blocks=1 00:09:04.105 --rc geninfo_unexecuted_blocks=1 00:09:04.105 00:09:04.105 ' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.105 --rc genhtml_branch_coverage=1 00:09:04.105 --rc genhtml_function_coverage=1 00:09:04.105 --rc genhtml_legend=1 00:09:04.105 --rc geninfo_all_blocks=1 00:09:04.105 --rc geninfo_unexecuted_blocks=1 00:09:04.105 00:09:04.105 ' 00:09:04.105 14:59:34 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:04.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.105 --rc genhtml_branch_coverage=1 00:09:04.105 --rc genhtml_function_coverage=1 00:09:04.105 --rc genhtml_legend=1 00:09:04.105 --rc geninfo_all_blocks=1 00:09:04.105 --rc geninfo_unexecuted_blocks=1 00:09:04.105 00:09:04.105 ' 00:09:04.105 14:59:34 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.105 14:59:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.105 14:59:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.105 14:59:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.105 14:59:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.105 14:59:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.105 14:59:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.105 14:59:34 -- paths/export.sh@5 -- # export PATH 00:09:04.105 14:59:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.105 14:59:34 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:04.105 14:59:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:04.105 14:59:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.105 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.106 ************************************ 00:09:04.106 START TEST dd_uring_copy 00:09:04.106 ************************************ 00:09:04.106 14:59:34 
-- common/autotest_common.sh@1114 -- # uring_zram_copy 00:09:04.106 14:59:34 -- dd/uring.sh@15 -- # local zram_dev_id 00:09:04.106 14:59:34 -- dd/uring.sh@16 -- # local magic 00:09:04.106 14:59:34 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:04.106 14:59:34 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:04.106 14:59:34 -- dd/uring.sh@19 -- # local verify_magic 00:09:04.106 14:59:34 -- dd/uring.sh@21 -- # init_zram 00:09:04.106 14:59:34 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:09:04.106 14:59:34 -- dd/common.sh@164 -- # return 00:09:04.106 14:59:34 -- dd/uring.sh@22 -- # create_zram_dev 00:09:04.106 14:59:34 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:09:04.106 14:59:34 -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:04.106 14:59:34 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:04.106 14:59:34 -- dd/common.sh@181 -- # local id=1 00:09:04.106 14:59:34 -- dd/common.sh@182 -- # local size=512M 00:09:04.106 14:59:34 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:09:04.106 14:59:34 -- dd/common.sh@186 -- # echo 512M 00:09:04.106 14:59:34 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:04.106 14:59:34 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:04.106 14:59:34 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:04.106 14:59:34 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:04.106 14:59:34 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:04.106 14:59:34 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:04.106 14:59:34 -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:04.106 14:59:34 -- dd/common.sh@98 -- # xtrace_disable 00:09:04.106 14:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.106 14:59:34 -- dd/uring.sh@41 -- # magic=4oa7la65zdo639k7rqb758ergxzl2vc7ooyf39p799ez1lfojjgvnxx0prll0r709tk1tuvo7j33a5tewradntoddmgbvj2jciotannenscdq8ajrcygrfsvv8vsherdzw3xqcvhdj2nefqf1uz6ud5kk9j2o3okccwoub28rp1vwh51nnlr6yxxds09uox3rmikbuilb6beum23a3czwlpefahc8ski3knr1ml5jew8v5pspmi0yaya3fbdxov2d8p5lzoxnmmolec3irunwsc4nnfe109xc8k8t1wsc60m0uv18f13a55fkrdvhoa8ovgzh08y58yoyv0600xur6yzjskdqo77aikf5dra6app00fgczkug00k01yewdhi2yixxyv7tbtie2za3wko28wiv7n3bpqprnan15kk8lizt23s4b5t83f18uwh0w7sty88pg8050lmbpprycnq03ctvp18757m27tipogk4ve2091gl2r2r9fb81sjwhtn5e9im2fkvix1zh1uzd4l1kj4dg5wfgq0m2gj8jw23pukyc0dpaboygfhlo7k0yot5l9f2oqjv5sqw55yav9xqndjgxdrwt7lnhi3p6m2yyx2rd5lw3nxtjo5s28p6p3k4t6rrlqha1kvrvqxy5b02pnxhy0se3j24rt62oh82t0ulfwxgzzae5klu60qmpuzqizbfe07s8foz95wx0e5rlmyy47ggox2aidjs70vp5y4t64b8o18gjsyunly4k8ai3qr6yvmt2ono9v75oui9yxns0nuuglpt3x3zb6hzgror7l9cbgq3gwvj203kmq4x7xjjnssd1b2xgkayvves11r541592k8cao4c7bidlhe9bu0o9tf0tfdmsa85c0ytp034x5bifbvpjyu4vaz47qb9lko6g28lrnummz9lq3awflelj9gsu1bpy4z85rt1lzqgfoxlgqa1un9ivmf8h6iiwx458gvu5hnbzof95crihbaohnrd84zz83k74qa 00:09:04.106 14:59:34 -- dd/uring.sh@42 -- # echo 
4oa7la65zdo639k7rqb758ergxzl2vc7ooyf39p799ez1lfojjgvnxx0prll0r709tk1tuvo7j33a5tewradntoddmgbvj2jciotannenscdq8ajrcygrfsvv8vsherdzw3xqcvhdj2nefqf1uz6ud5kk9j2o3okccwoub28rp1vwh51nnlr6yxxds09uox3rmikbuilb6beum23a3czwlpefahc8ski3knr1ml5jew8v5pspmi0yaya3fbdxov2d8p5lzoxnmmolec3irunwsc4nnfe109xc8k8t1wsc60m0uv18f13a55fkrdvhoa8ovgzh08y58yoyv0600xur6yzjskdqo77aikf5dra6app00fgczkug00k01yewdhi2yixxyv7tbtie2za3wko28wiv7n3bpqprnan15kk8lizt23s4b5t83f18uwh0w7sty88pg8050lmbpprycnq03ctvp18757m27tipogk4ve2091gl2r2r9fb81sjwhtn5e9im2fkvix1zh1uzd4l1kj4dg5wfgq0m2gj8jw23pukyc0dpaboygfhlo7k0yot5l9f2oqjv5sqw55yav9xqndjgxdrwt7lnhi3p6m2yyx2rd5lw3nxtjo5s28p6p3k4t6rrlqha1kvrvqxy5b02pnxhy0se3j24rt62oh82t0ulfwxgzzae5klu60qmpuzqizbfe07s8foz95wx0e5rlmyy47ggox2aidjs70vp5y4t64b8o18gjsyunly4k8ai3qr6yvmt2ono9v75oui9yxns0nuuglpt3x3zb6hzgror7l9cbgq3gwvj203kmq4x7xjjnssd1b2xgkayvves11r541592k8cao4c7bidlhe9bu0o9tf0tfdmsa85c0ytp034x5bifbvpjyu4vaz47qb9lko6g28lrnummz9lq3awflelj9gsu1bpy4z85rt1lzqgfoxlgqa1un9ivmf8h6iiwx458gvu5hnbzof95crihbaohnrd84zz83k74qa 00:09:04.106 14:59:34 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:04.364 [2024-11-20 14:59:34.961512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:04.364 [2024-11-20 14:59:34.961912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:09:04.364 [2024-11-20 14:59:35.098959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.364 [2024-11-20 14:59:35.133814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.930  [2024-11-20T14:59:35.992Z] Copying: 511/511 [MB] (average 1570 MBps) 00:09:05.188 00:09:05.188 14:59:35 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:05.188 14:59:35 -- dd/uring.sh@54 -- # gen_conf 00:09:05.188 14:59:35 -- dd/common.sh@31 -- # xtrace_disable 00:09:05.188 14:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.188 [2024-11-20 14:59:35.880096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
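Note on the dd_uring_copy setup above: it hot-adds a 512 MiB zram device and builds a 512 MiB input file consisting of a 1024-character random magic line plus one appended block of 536869887 zero bytes, which together with the newline add up to exactly 536870912 bytes. A sketch of that preparation; the sysfs disksize path and the urandom/base64 stand-in for the harness's gen_bytes helper are assumptions, while the sizes and spdk_dd flags come from the log.

dev_id=$(cat /sys/class/zram-control/hot_add)    # the log shows this returned 1 -> /dev/zram1
echo 512M > "/sys/block/zram${dev_id}/disksize"  # assumed target of the 'echo 512M' in the log
# 768 random bytes base64-encode to exactly 1024 characters (stand-in for gen_bytes 1024).
magic=$(head -c 768 /dev/urandom | base64 -w0)
echo "$magic" > magic.dump0                      # 1024 characters plus a newline
spdk_dd --if=/dev/zero --of=magic.dump0 --oflag=append --bs=536869887 --count=1
# 1025 + 536869887 = 536870912 bytes, i.e. 512 MiB in total.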
00:09:05.188 [2024-11-20 14:59:35.880392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70885 ] 00:09:05.188 { 00:09:05.188 "subsystems": [ 00:09:05.188 { 00:09:05.188 "subsystem": "bdev", 00:09:05.188 "config": [ 00:09:05.188 { 00:09:05.188 "params": { 00:09:05.188 "block_size": 512, 00:09:05.188 "num_blocks": 1048576, 00:09:05.188 "name": "malloc0" 00:09:05.188 }, 00:09:05.188 "method": "bdev_malloc_create" 00:09:05.188 }, 00:09:05.188 { 00:09:05.188 "params": { 00:09:05.188 "filename": "/dev/zram1", 00:09:05.188 "name": "uring0" 00:09:05.188 }, 00:09:05.188 "method": "bdev_uring_create" 00:09:05.188 }, 00:09:05.188 { 00:09:05.188 "method": "bdev_wait_for_examine" 00:09:05.188 } 00:09:05.188 ] 00:09:05.188 } 00:09:05.188 ] 00:09:05.188 } 00:09:05.447 [2024-11-20 14:59:36.009001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.447 [2024-11-20 14:59:36.045052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.826  [2024-11-20T14:59:38.197Z] Copying: 203/512 [MB] (203 MBps) [2024-11-20T14:59:39.131Z] Copying: 405/512 [MB] (202 MBps) [2024-11-20T14:59:39.131Z] Copying: 512/512 [MB] (average 198 MBps) 00:09:08.327 00:09:08.327 14:59:39 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:08.327 14:59:39 -- dd/uring.sh@60 -- # gen_conf 00:09:08.327 14:59:39 -- dd/common.sh@31 -- # xtrace_disable 00:09:08.327 14:59:39 -- common/autotest_common.sh@10 -- # set +x 00:09:08.327 [2024-11-20 14:59:39.101009] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
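Note on the JSON streamed to spdk_dd above: it defines a 512 MiB malloc bdev (1048576 blocks of 512 bytes) next to a uring bdev backed by /dev/zram1. Reproduced here as a standalone config, followed by the copies the test runs in both directions; only the file name uring.json is a placeholder.

cat > uring.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_uring_create",
    "params": { "filename": "/dev/zram1", "name": "uring0" } },
  { "method": "bdev_wait_for_examine" }
] } ] }
EOF
spdk_dd --if=magic.dump0 --ob=uring0 --json uring.json   # file -> uring bdev (zram)
spdk_dd --ib=uring0 --of=magic.dump1 --json uring.json   # uring bdev -> file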
00:09:08.327 [2024-11-20 14:59:39.101445] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70934 ] 00:09:08.327 { 00:09:08.327 "subsystems": [ 00:09:08.327 { 00:09:08.327 "subsystem": "bdev", 00:09:08.327 "config": [ 00:09:08.327 { 00:09:08.327 "params": { 00:09:08.327 "block_size": 512, 00:09:08.327 "num_blocks": 1048576, 00:09:08.327 "name": "malloc0" 00:09:08.327 }, 00:09:08.327 "method": "bdev_malloc_create" 00:09:08.327 }, 00:09:08.327 { 00:09:08.327 "params": { 00:09:08.327 "filename": "/dev/zram1", 00:09:08.327 "name": "uring0" 00:09:08.327 }, 00:09:08.327 "method": "bdev_uring_create" 00:09:08.327 }, 00:09:08.327 { 00:09:08.327 "method": "bdev_wait_for_examine" 00:09:08.327 } 00:09:08.327 ] 00:09:08.327 } 00:09:08.327 ] 00:09:08.327 } 00:09:08.586 [2024-11-20 14:59:39.236568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.586 [2024-11-20 14:59:39.271442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.962  [2024-11-20T14:59:41.739Z] Copying: 137/512 [MB] (137 MBps) [2024-11-20T14:59:42.674Z] Copying: 263/512 [MB] (126 MBps) [2024-11-20T14:59:43.610Z] Copying: 397/512 [MB] (134 MBps) [2024-11-20T14:59:43.868Z] Copying: 512/512 [MB] (average 129 MBps) 00:09:13.064 00:09:13.064 14:59:43 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:13.064 14:59:43 -- dd/uring.sh@66 -- # [[ 4oa7la65zdo639k7rqb758ergxzl2vc7ooyf39p799ez1lfojjgvnxx0prll0r709tk1tuvo7j33a5tewradntoddmgbvj2jciotannenscdq8ajrcygrfsvv8vsherdzw3xqcvhdj2nefqf1uz6ud5kk9j2o3okccwoub28rp1vwh51nnlr6yxxds09uox3rmikbuilb6beum23a3czwlpefahc8ski3knr1ml5jew8v5pspmi0yaya3fbdxov2d8p5lzoxnmmolec3irunwsc4nnfe109xc8k8t1wsc60m0uv18f13a55fkrdvhoa8ovgzh08y58yoyv0600xur6yzjskdqo77aikf5dra6app00fgczkug00k01yewdhi2yixxyv7tbtie2za3wko28wiv7n3bpqprnan15kk8lizt23s4b5t83f18uwh0w7sty88pg8050lmbpprycnq03ctvp18757m27tipogk4ve2091gl2r2r9fb81sjwhtn5e9im2fkvix1zh1uzd4l1kj4dg5wfgq0m2gj8jw23pukyc0dpaboygfhlo7k0yot5l9f2oqjv5sqw55yav9xqndjgxdrwt7lnhi3p6m2yyx2rd5lw3nxtjo5s28p6p3k4t6rrlqha1kvrvqxy5b02pnxhy0se3j24rt62oh82t0ulfwxgzzae5klu60qmpuzqizbfe07s8foz95wx0e5rlmyy47ggox2aidjs70vp5y4t64b8o18gjsyunly4k8ai3qr6yvmt2ono9v75oui9yxns0nuuglpt3x3zb6hzgror7l9cbgq3gwvj203kmq4x7xjjnssd1b2xgkayvves11r541592k8cao4c7bidlhe9bu0o9tf0tfdmsa85c0ytp034x5bifbvpjyu4vaz47qb9lko6g28lrnummz9lq3awflelj9gsu1bpy4z85rt1lzqgfoxlgqa1un9ivmf8h6iiwx458gvu5hnbzof95crihbaohnrd84zz83k74qa == 
\4\o\a\7\l\a\6\5\z\d\o\6\3\9\k\7\r\q\b\7\5\8\e\r\g\x\z\l\2\v\c\7\o\o\y\f\3\9\p\7\9\9\e\z\1\l\f\o\j\j\g\v\n\x\x\0\p\r\l\l\0\r\7\0\9\t\k\1\t\u\v\o\7\j\3\3\a\5\t\e\w\r\a\d\n\t\o\d\d\m\g\b\v\j\2\j\c\i\o\t\a\n\n\e\n\s\c\d\q\8\a\j\r\c\y\g\r\f\s\v\v\8\v\s\h\e\r\d\z\w\3\x\q\c\v\h\d\j\2\n\e\f\q\f\1\u\z\6\u\d\5\k\k\9\j\2\o\3\o\k\c\c\w\o\u\b\2\8\r\p\1\v\w\h\5\1\n\n\l\r\6\y\x\x\d\s\0\9\u\o\x\3\r\m\i\k\b\u\i\l\b\6\b\e\u\m\2\3\a\3\c\z\w\l\p\e\f\a\h\c\8\s\k\i\3\k\n\r\1\m\l\5\j\e\w\8\v\5\p\s\p\m\i\0\y\a\y\a\3\f\b\d\x\o\v\2\d\8\p\5\l\z\o\x\n\m\m\o\l\e\c\3\i\r\u\n\w\s\c\4\n\n\f\e\1\0\9\x\c\8\k\8\t\1\w\s\c\6\0\m\0\u\v\1\8\f\1\3\a\5\5\f\k\r\d\v\h\o\a\8\o\v\g\z\h\0\8\y\5\8\y\o\y\v\0\6\0\0\x\u\r\6\y\z\j\s\k\d\q\o\7\7\a\i\k\f\5\d\r\a\6\a\p\p\0\0\f\g\c\z\k\u\g\0\0\k\0\1\y\e\w\d\h\i\2\y\i\x\x\y\v\7\t\b\t\i\e\2\z\a\3\w\k\o\2\8\w\i\v\7\n\3\b\p\q\p\r\n\a\n\1\5\k\k\8\l\i\z\t\2\3\s\4\b\5\t\8\3\f\1\8\u\w\h\0\w\7\s\t\y\8\8\p\g\8\0\5\0\l\m\b\p\p\r\y\c\n\q\0\3\c\t\v\p\1\8\7\5\7\m\2\7\t\i\p\o\g\k\4\v\e\2\0\9\1\g\l\2\r\2\r\9\f\b\8\1\s\j\w\h\t\n\5\e\9\i\m\2\f\k\v\i\x\1\z\h\1\u\z\d\4\l\1\k\j\4\d\g\5\w\f\g\q\0\m\2\g\j\8\j\w\2\3\p\u\k\y\c\0\d\p\a\b\o\y\g\f\h\l\o\7\k\0\y\o\t\5\l\9\f\2\o\q\j\v\5\s\q\w\5\5\y\a\v\9\x\q\n\d\j\g\x\d\r\w\t\7\l\n\h\i\3\p\6\m\2\y\y\x\2\r\d\5\l\w\3\n\x\t\j\o\5\s\2\8\p\6\p\3\k\4\t\6\r\r\l\q\h\a\1\k\v\r\v\q\x\y\5\b\0\2\p\n\x\h\y\0\s\e\3\j\2\4\r\t\6\2\o\h\8\2\t\0\u\l\f\w\x\g\z\z\a\e\5\k\l\u\6\0\q\m\p\u\z\q\i\z\b\f\e\0\7\s\8\f\o\z\9\5\w\x\0\e\5\r\l\m\y\y\4\7\g\g\o\x\2\a\i\d\j\s\7\0\v\p\5\y\4\t\6\4\b\8\o\1\8\g\j\s\y\u\n\l\y\4\k\8\a\i\3\q\r\6\y\v\m\t\2\o\n\o\9\v\7\5\o\u\i\9\y\x\n\s\0\n\u\u\g\l\p\t\3\x\3\z\b\6\h\z\g\r\o\r\7\l\9\c\b\g\q\3\g\w\v\j\2\0\3\k\m\q\4\x\7\x\j\j\n\s\s\d\1\b\2\x\g\k\a\y\v\v\e\s\1\1\r\5\4\1\5\9\2\k\8\c\a\o\4\c\7\b\i\d\l\h\e\9\b\u\0\o\9\t\f\0\t\f\d\m\s\a\8\5\c\0\y\t\p\0\3\4\x\5\b\i\f\b\v\p\j\y\u\4\v\a\z\4\7\q\b\9\l\k\o\6\g\2\8\l\r\n\u\m\m\z\9\l\q\3\a\w\f\l\e\l\j\9\g\s\u\1\b\p\y\4\z\8\5\r\t\1\l\z\q\g\f\o\x\l\g\q\a\1\u\n\9\i\v\m\f\8\h\6\i\i\w\x\4\5\8\g\v\u\5\h\n\b\z\o\f\9\5\c\r\i\h\b\a\o\h\n\r\d\8\4\z\z\8\3\k\7\4\q\a ]] 00:09:13.065 14:59:43 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:13.065 14:59:43 -- dd/uring.sh@69 -- # [[ 4oa7la65zdo639k7rqb758ergxzl2vc7ooyf39p799ez1lfojjgvnxx0prll0r709tk1tuvo7j33a5tewradntoddmgbvj2jciotannenscdq8ajrcygrfsvv8vsherdzw3xqcvhdj2nefqf1uz6ud5kk9j2o3okccwoub28rp1vwh51nnlr6yxxds09uox3rmikbuilb6beum23a3czwlpefahc8ski3knr1ml5jew8v5pspmi0yaya3fbdxov2d8p5lzoxnmmolec3irunwsc4nnfe109xc8k8t1wsc60m0uv18f13a55fkrdvhoa8ovgzh08y58yoyv0600xur6yzjskdqo77aikf5dra6app00fgczkug00k01yewdhi2yixxyv7tbtie2za3wko28wiv7n3bpqprnan15kk8lizt23s4b5t83f18uwh0w7sty88pg8050lmbpprycnq03ctvp18757m27tipogk4ve2091gl2r2r9fb81sjwhtn5e9im2fkvix1zh1uzd4l1kj4dg5wfgq0m2gj8jw23pukyc0dpaboygfhlo7k0yot5l9f2oqjv5sqw55yav9xqndjgxdrwt7lnhi3p6m2yyx2rd5lw3nxtjo5s28p6p3k4t6rrlqha1kvrvqxy5b02pnxhy0se3j24rt62oh82t0ulfwxgzzae5klu60qmpuzqizbfe07s8foz95wx0e5rlmyy47ggox2aidjs70vp5y4t64b8o18gjsyunly4k8ai3qr6yvmt2ono9v75oui9yxns0nuuglpt3x3zb6hzgror7l9cbgq3gwvj203kmq4x7xjjnssd1b2xgkayvves11r541592k8cao4c7bidlhe9bu0o9tf0tfdmsa85c0ytp034x5bifbvpjyu4vaz47qb9lko6g28lrnummz9lq3awflelj9gsu1bpy4z85rt1lzqgfoxlgqa1un9ivmf8h6iiwx458gvu5hnbzof95crihbaohnrd84zz83k74qa == 
\4\o\a\7\l\a\6\5\z\d\o\6\3\9\k\7\r\q\b\7\5\8\e\r\g\x\z\l\2\v\c\7\o\o\y\f\3\9\p\7\9\9\e\z\1\l\f\o\j\j\g\v\n\x\x\0\p\r\l\l\0\r\7\0\9\t\k\1\t\u\v\o\7\j\3\3\a\5\t\e\w\r\a\d\n\t\o\d\d\m\g\b\v\j\2\j\c\i\o\t\a\n\n\e\n\s\c\d\q\8\a\j\r\c\y\g\r\f\s\v\v\8\v\s\h\e\r\d\z\w\3\x\q\c\v\h\d\j\2\n\e\f\q\f\1\u\z\6\u\d\5\k\k\9\j\2\o\3\o\k\c\c\w\o\u\b\2\8\r\p\1\v\w\h\5\1\n\n\l\r\6\y\x\x\d\s\0\9\u\o\x\3\r\m\i\k\b\u\i\l\b\6\b\e\u\m\2\3\a\3\c\z\w\l\p\e\f\a\h\c\8\s\k\i\3\k\n\r\1\m\l\5\j\e\w\8\v\5\p\s\p\m\i\0\y\a\y\a\3\f\b\d\x\o\v\2\d\8\p\5\l\z\o\x\n\m\m\o\l\e\c\3\i\r\u\n\w\s\c\4\n\n\f\e\1\0\9\x\c\8\k\8\t\1\w\s\c\6\0\m\0\u\v\1\8\f\1\3\a\5\5\f\k\r\d\v\h\o\a\8\o\v\g\z\h\0\8\y\5\8\y\o\y\v\0\6\0\0\x\u\r\6\y\z\j\s\k\d\q\o\7\7\a\i\k\f\5\d\r\a\6\a\p\p\0\0\f\g\c\z\k\u\g\0\0\k\0\1\y\e\w\d\h\i\2\y\i\x\x\y\v\7\t\b\t\i\e\2\z\a\3\w\k\o\2\8\w\i\v\7\n\3\b\p\q\p\r\n\a\n\1\5\k\k\8\l\i\z\t\2\3\s\4\b\5\t\8\3\f\1\8\u\w\h\0\w\7\s\t\y\8\8\p\g\8\0\5\0\l\m\b\p\p\r\y\c\n\q\0\3\c\t\v\p\1\8\7\5\7\m\2\7\t\i\p\o\g\k\4\v\e\2\0\9\1\g\l\2\r\2\r\9\f\b\8\1\s\j\w\h\t\n\5\e\9\i\m\2\f\k\v\i\x\1\z\h\1\u\z\d\4\l\1\k\j\4\d\g\5\w\f\g\q\0\m\2\g\j\8\j\w\2\3\p\u\k\y\c\0\d\p\a\b\o\y\g\f\h\l\o\7\k\0\y\o\t\5\l\9\f\2\o\q\j\v\5\s\q\w\5\5\y\a\v\9\x\q\n\d\j\g\x\d\r\w\t\7\l\n\h\i\3\p\6\m\2\y\y\x\2\r\d\5\l\w\3\n\x\t\j\o\5\s\2\8\p\6\p\3\k\4\t\6\r\r\l\q\h\a\1\k\v\r\v\q\x\y\5\b\0\2\p\n\x\h\y\0\s\e\3\j\2\4\r\t\6\2\o\h\8\2\t\0\u\l\f\w\x\g\z\z\a\e\5\k\l\u\6\0\q\m\p\u\z\q\i\z\b\f\e\0\7\s\8\f\o\z\9\5\w\x\0\e\5\r\l\m\y\y\4\7\g\g\o\x\2\a\i\d\j\s\7\0\v\p\5\y\4\t\6\4\b\8\o\1\8\g\j\s\y\u\n\l\y\4\k\8\a\i\3\q\r\6\y\v\m\t\2\o\n\o\9\v\7\5\o\u\i\9\y\x\n\s\0\n\u\u\g\l\p\t\3\x\3\z\b\6\h\z\g\r\o\r\7\l\9\c\b\g\q\3\g\w\v\j\2\0\3\k\m\q\4\x\7\x\j\j\n\s\s\d\1\b\2\x\g\k\a\y\v\v\e\s\1\1\r\5\4\1\5\9\2\k\8\c\a\o\4\c\7\b\i\d\l\h\e\9\b\u\0\o\9\t\f\0\t\f\d\m\s\a\8\5\c\0\y\t\p\0\3\4\x\5\b\i\f\b\v\p\j\y\u\4\v\a\z\4\7\q\b\9\l\k\o\6\g\2\8\l\r\n\u\m\m\z\9\l\q\3\a\w\f\l\e\l\j\9\g\s\u\1\b\p\y\4\z\8\5\r\t\1\l\z\q\g\f\o\x\l\g\q\a\1\u\n\9\i\v\m\f\8\h\6\i\i\w\x\4\5\8\g\v\u\5\h\n\b\z\o\f\9\5\c\r\i\h\b\a\o\h\n\r\d\8\4\z\z\8\3\k\7\4\q\a ]] 00:09:13.065 14:59:43 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:13.323 14:59:44 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:13.323 14:59:44 -- dd/uring.sh@75 -- # gen_conf 00:09:13.323 14:59:44 -- dd/common.sh@31 -- # xtrace_disable 00:09:13.323 14:59:44 -- common/autotest_common.sh@10 -- # set +x 00:09:13.323 [2024-11-20 14:59:44.051242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
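Note on the verification pass above: the test reads 1024 bytes back, compares them against the planted magic, and lets diff confirm the whole 512 MiB survived the round trip before pushing the data on into the malloc bdev. A compact equivalent; which file each read targets is an assumption, but the read/compare/diff/copy steps mirror the log.

read -rn1024 verify_magic < magic.dump1
[[ $verify_magic == "$magic" ]] || { echo "magic mismatch" >&2; exit 1; }   # $magic from the earlier sketch
diff -q magic.dump0 magic.dump1                          # full-file comparison
spdk_dd --ib=uring0 --ob=malloc0 --json uring.json       # same data into the malloc bdev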
00:09:13.323 [2024-11-20 14:59:44.052032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71016 ] 00:09:13.323 { 00:09:13.323 "subsystems": [ 00:09:13.323 { 00:09:13.323 "subsystem": "bdev", 00:09:13.323 "config": [ 00:09:13.323 { 00:09:13.323 "params": { 00:09:13.323 "block_size": 512, 00:09:13.323 "num_blocks": 1048576, 00:09:13.323 "name": "malloc0" 00:09:13.323 }, 00:09:13.323 "method": "bdev_malloc_create" 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "params": { 00:09:13.323 "filename": "/dev/zram1", 00:09:13.323 "name": "uring0" 00:09:13.323 }, 00:09:13.323 "method": "bdev_uring_create" 00:09:13.323 }, 00:09:13.323 { 00:09:13.323 "method": "bdev_wait_for_examine" 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 } 00:09:13.323 ] 00:09:13.323 } 00:09:13.581 [2024-11-20 14:59:44.186195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.581 [2024-11-20 14:59:44.221237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.956  [2024-11-20T14:59:46.694Z] Copying: 133/512 [MB] (133 MBps) [2024-11-20T14:59:47.712Z] Copying: 272/512 [MB] (138 MBps) [2024-11-20T14:59:48.279Z] Copying: 414/512 [MB] (141 MBps) [2024-11-20T14:59:48.538Z] Copying: 512/512 [MB] (average 138 MBps) 00:09:17.734 00:09:17.734 14:59:48 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:17.734 14:59:48 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:17.734 14:59:48 -- dd/uring.sh@87 -- # : 00:09:17.734 14:59:48 -- dd/uring.sh@87 -- # : 00:09:17.734 14:59:48 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:17.734 14:59:48 -- dd/uring.sh@87 -- # gen_conf 00:09:17.734 14:59:48 -- dd/common.sh@31 -- # xtrace_disable 00:09:17.734 14:59:48 -- common/autotest_common.sh@10 -- # set +x 00:09:17.734 [2024-11-20 14:59:48.352622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
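Note on the step that starts here: spdk_dd is driven purely for its side effect, since the config ends with bdev_uring_delete and a zero-byte copy through it tears the uring0 bdev down. A sketch of that config; using /dev/null in place of the /dev/fd plumbing in the log is a simplification, while the method and parameter names are copied from the JSON that follows.

cat > uring_delete.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "method": "bdev_malloc_create",
    "params": { "name": "malloc0", "num_blocks": 1048576, "block_size": 512 } },
  { "method": "bdev_uring_create",
    "params": { "filename": "/dev/zram1", "name": "uring0" } },
  { "method": "bdev_uring_delete", "params": { "name": "uring0" } },
  { "method": "bdev_wait_for_examine" }
] } ] }
EOF
spdk_dd --if=/dev/null --of=/dev/null --json uring_delete.json   # 0-byte copy, config side effects only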
00:09:17.734 [2024-11-20 14:59:48.353510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71083 ] 00:09:17.734 { 00:09:17.734 "subsystems": [ 00:09:17.734 { 00:09:17.734 "subsystem": "bdev", 00:09:17.734 "config": [ 00:09:17.734 { 00:09:17.734 "params": { 00:09:17.734 "block_size": 512, 00:09:17.734 "num_blocks": 1048576, 00:09:17.734 "name": "malloc0" 00:09:17.734 }, 00:09:17.734 "method": "bdev_malloc_create" 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "params": { 00:09:17.734 "filename": "/dev/zram1", 00:09:17.734 "name": "uring0" 00:09:17.734 }, 00:09:17.734 "method": "bdev_uring_create" 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "params": { 00:09:17.734 "name": "uring0" 00:09:17.734 }, 00:09:17.734 "method": "bdev_uring_delete" 00:09:17.734 }, 00:09:17.734 { 00:09:17.734 "method": "bdev_wait_for_examine" 00:09:17.734 } 00:09:17.734 ] 00:09:17.734 } 00:09:17.734 ] 00:09:17.734 } 00:09:17.734 [2024-11-20 14:59:48.484811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.734 [2024-11-20 14:59:48.521433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.993  [2024-11-20T14:59:49.056Z] Copying: 0/0 [B] (average 0 Bps) 00:09:18.252 00:09:18.252 14:59:48 -- dd/uring.sh@94 -- # : 00:09:18.252 14:59:48 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.252 14:59:48 -- dd/uring.sh@94 -- # gen_conf 00:09:18.252 14:59:48 -- dd/common.sh@31 -- # xtrace_disable 00:09:18.252 14:59:48 -- common/autotest_common.sh@650 -- # local es=0 00:09:18.252 14:59:48 -- common/autotest_common.sh@10 -- # set +x 00:09:18.252 14:59:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.252 14:59:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.252 14:59:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.252 14:59:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.252 14:59:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.252 14:59:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.252 14:59:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.252 14:59:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:18.252 14:59:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:18.252 14:59:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:18.252 [2024-11-20 14:59:48.990496] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
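Note on the expected-failure half of the test above: once uring0 has been deleted, opening it as --ib must fail, and the harness's NOT wrapper turns that failure into a pass. Without the wrapper, the same check looks roughly like this (output path and config name reuse the placeholders from the previous sketch):

if spdk_dd --ib=uring0 --of=/dev/null --json uring_delete.json; then
    echo "unexpected success: uring0 should no longer exist" >&2
    exit 1
fi
echo "spdk_dd failed as expected (bdev uring0 not found)"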
00:09:18.252 [2024-11-20 14:59:48.990588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71101 ] 00:09:18.252 { 00:09:18.252 "subsystems": [ 00:09:18.252 { 00:09:18.252 "subsystem": "bdev", 00:09:18.252 "config": [ 00:09:18.252 { 00:09:18.252 "params": { 00:09:18.252 "block_size": 512, 00:09:18.252 "num_blocks": 1048576, 00:09:18.252 "name": "malloc0" 00:09:18.252 }, 00:09:18.252 "method": "bdev_malloc_create" 00:09:18.252 }, 00:09:18.252 { 00:09:18.252 "params": { 00:09:18.252 "filename": "/dev/zram1", 00:09:18.252 "name": "uring0" 00:09:18.252 }, 00:09:18.252 "method": "bdev_uring_create" 00:09:18.252 }, 00:09:18.252 { 00:09:18.252 "params": { 00:09:18.252 "name": "uring0" 00:09:18.252 }, 00:09:18.252 "method": "bdev_uring_delete" 00:09:18.252 }, 00:09:18.252 { 00:09:18.252 "method": "bdev_wait_for_examine" 00:09:18.252 } 00:09:18.252 ] 00:09:18.252 } 00:09:18.252 ] 00:09:18.252 } 00:09:18.511 [2024-11-20 14:59:49.128043] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.511 [2024-11-20 14:59:49.163887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.770 [2024-11-20 14:59:49.316968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:18.770 [2024-11-20 14:59:49.317231] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:18.770 [2024-11-20 14:59:49.317284] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:09:18.770 [2024-11-20 14:59:49.317400] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:18.770 [2024-11-20 14:59:49.484538] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:18.770 14:59:49 -- common/autotest_common.sh@653 -- # es=237 00:09:18.770 14:59:49 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.770 14:59:49 -- common/autotest_common.sh@662 -- # es=109 00:09:18.770 14:59:49 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:18.770 14:59:49 -- common/autotest_common.sh@670 -- # es=1 00:09:18.770 14:59:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.770 14:59:49 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:18.770 14:59:49 -- dd/common.sh@172 -- # local id=1 00:09:18.770 14:59:49 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:09:18.770 14:59:49 -- dd/common.sh@176 -- # echo 1 00:09:18.770 14:59:49 -- dd/common.sh@177 -- # echo 1 00:09:19.028 14:59:49 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:19.028 00:09:19.028 real 0m14.954s 00:09:19.028 user 0m8.599s 00:09:19.028 sys 0m5.591s 00:09:19.028 14:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.028 ************************************ 00:09:19.028 END TEST dd_uring_copy 00:09:19.028 ************************************ 00:09:19.028 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:09:19.287 ************************************ 00:09:19.287 END TEST spdk_dd_uring 00:09:19.287 ************************************ 00:09:19.287 00:09:19.287 real 0m15.179s 00:09:19.287 user 0m8.736s 00:09:19.287 sys 0m5.683s 00:09:19.287 14:59:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:19.287 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:09:19.287 14:59:49 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:19.287 14:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.287 14:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.287 14:59:49 -- common/autotest_common.sh@10 -- # set +x 00:09:19.287 ************************************ 00:09:19.287 START TEST spdk_dd_sparse 00:09:19.287 ************************************ 00:09:19.287 14:59:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:19.287 * Looking for test storage... 00:09:19.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:19.287 14:59:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:19.287 14:59:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:19.287 14:59:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:19.287 14:59:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:19.287 14:59:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:19.287 14:59:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:19.287 14:59:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:19.287 14:59:50 -- scripts/common.sh@335 -- # IFS=.-: 00:09:19.287 14:59:50 -- scripts/common.sh@335 -- # read -ra ver1 00:09:19.288 14:59:50 -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.288 14:59:50 -- scripts/common.sh@336 -- # read -ra ver2 00:09:19.288 14:59:50 -- scripts/common.sh@337 -- # local 'op=<' 00:09:19.288 14:59:50 -- scripts/common.sh@339 -- # ver1_l=2 00:09:19.288 14:59:50 -- scripts/common.sh@340 -- # ver2_l=1 00:09:19.288 14:59:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:19.288 14:59:50 -- scripts/common.sh@343 -- # case "$op" in 00:09:19.288 14:59:50 -- scripts/common.sh@344 -- # : 1 00:09:19.288 14:59:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:19.288 14:59:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.288 14:59:50 -- scripts/common.sh@364 -- # decimal 1 00:09:19.288 14:59:50 -- scripts/common.sh@352 -- # local d=1 00:09:19.288 14:59:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.288 14:59:50 -- scripts/common.sh@354 -- # echo 1 00:09:19.288 14:59:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:19.288 14:59:50 -- scripts/common.sh@365 -- # decimal 2 00:09:19.288 14:59:50 -- scripts/common.sh@352 -- # local d=2 00:09:19.288 14:59:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.288 14:59:50 -- scripts/common.sh@354 -- # echo 2 00:09:19.288 14:59:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:19.288 14:59:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:19.288 14:59:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:19.288 14:59:50 -- scripts/common.sh@367 -- # return 0 00:09:19.288 14:59:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.288 14:59:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:19.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.288 --rc genhtml_branch_coverage=1 00:09:19.288 --rc genhtml_function_coverage=1 00:09:19.288 --rc genhtml_legend=1 00:09:19.288 --rc geninfo_all_blocks=1 00:09:19.288 --rc geninfo_unexecuted_blocks=1 00:09:19.288 00:09:19.288 ' 00:09:19.288 14:59:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:19.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.288 --rc genhtml_branch_coverage=1 00:09:19.288 --rc genhtml_function_coverage=1 00:09:19.288 --rc genhtml_legend=1 00:09:19.288 --rc geninfo_all_blocks=1 00:09:19.288 --rc geninfo_unexecuted_blocks=1 00:09:19.288 00:09:19.288 ' 00:09:19.288 14:59:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:19.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.288 --rc genhtml_branch_coverage=1 00:09:19.288 --rc genhtml_function_coverage=1 00:09:19.288 --rc genhtml_legend=1 00:09:19.288 --rc geninfo_all_blocks=1 00:09:19.288 --rc geninfo_unexecuted_blocks=1 00:09:19.288 00:09:19.288 ' 00:09:19.288 14:59:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:19.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.288 --rc genhtml_branch_coverage=1 00:09:19.288 --rc genhtml_function_coverage=1 00:09:19.288 --rc genhtml_legend=1 00:09:19.288 --rc geninfo_all_blocks=1 00:09:19.288 --rc geninfo_unexecuted_blocks=1 00:09:19.288 00:09:19.288 ' 00:09:19.288 14:59:50 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.288 14:59:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.288 14:59:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.288 14:59:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.288 14:59:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.288 14:59:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.288 14:59:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.288 14:59:50 -- paths/export.sh@5 -- # export PATH 00:09:19.288 14:59:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.547 14:59:50 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:19.547 14:59:50 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:19.547 14:59:50 -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:19.547 14:59:50 -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:19.547 14:59:50 -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:19.547 14:59:50 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:19.547 14:59:50 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:19.547 14:59:50 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:19.547 14:59:50 -- dd/sparse.sh@118 -- # prepare 00:09:19.547 14:59:50 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:19.547 14:59:50 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:19.547 1+0 records in 00:09:19.547 1+0 records out 00:09:19.547 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00540636 s, 776 MB/s 00:09:19.547 14:59:50 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:19.547 1+0 records in 00:09:19.547 1+0 records out 00:09:19.547 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00573121 s, 732 MB/s 00:09:19.547 14:59:50 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:19.547 1+0 records in 00:09:19.547 1+0 records out 00:09:19.547 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0056023 s, 749 MB/s 00:09:19.547 14:59:50 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:19.547 14:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:19.547 14:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.547 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.547 ************************************ 00:09:19.547 START TEST dd_sparse_file_to_file 00:09:19.547 
************************************ 00:09:19.547 14:59:50 -- common/autotest_common.sh@1114 -- # file_to_file 00:09:19.547 14:59:50 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:19.547 14:59:50 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:19.547 14:59:50 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:19.547 14:59:50 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:19.547 14:59:50 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:19.547 14:59:50 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:19.547 14:59:50 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:19.547 14:59:50 -- dd/sparse.sh@41 -- # gen_conf 00:09:19.547 14:59:50 -- dd/common.sh@31 -- # xtrace_disable 00:09:19.547 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.547 [2024-11-20 14:59:50.179316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:19.547 [2024-11-20 14:59:50.179410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71202 ] 00:09:19.547 { 00:09:19.547 "subsystems": [ 00:09:19.547 { 00:09:19.547 "subsystem": "bdev", 00:09:19.547 "config": [ 00:09:19.547 { 00:09:19.547 "params": { 00:09:19.547 "block_size": 4096, 00:09:19.547 "filename": "dd_sparse_aio_disk", 00:09:19.547 "name": "dd_aio" 00:09:19.547 }, 00:09:19.547 "method": "bdev_aio_create" 00:09:19.547 }, 00:09:19.547 { 00:09:19.547 "params": { 00:09:19.547 "lvs_name": "dd_lvstore", 00:09:19.547 "bdev_name": "dd_aio" 00:09:19.547 }, 00:09:19.547 "method": "bdev_lvol_create_lvstore" 00:09:19.547 }, 00:09:19.547 { 00:09:19.547 "method": "bdev_wait_for_examine" 00:09:19.547 } 00:09:19.547 ] 00:09:19.547 } 00:09:19.547 ] 00:09:19.547 } 00:09:19.547 [2024-11-20 14:59:50.314132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.806 [2024-11-20 14:59:50.356929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.806  [2024-11-20T14:59:50.868Z] Copying: 12/36 [MB] (average 631 MBps) 00:09:20.064 00:09:20.064 14:59:50 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:20.064 14:59:50 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:20.064 14:59:50 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:20.064 14:59:50 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:20.064 14:59:50 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:20.064 14:59:50 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:20.064 14:59:50 -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:20.064 14:59:50 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:20.064 14:59:50 -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:20.064 14:59:50 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:20.064 00:09:20.064 real 0m0.558s 00:09:20.064 user 0m0.324s 00:09:20.064 sys 0m0.145s 00:09:20.064 14:59:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.064 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.064 ************************************ 00:09:20.064 END TEST dd_sparse_file_to_file 00:09:20.064 ************************************ 00:09:20.064 14:59:50 -- dd/sparse.sh@121 -- # 
run_test dd_sparse_file_to_bdev file_to_bdev 00:09:20.064 14:59:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.064 14:59:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.064 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.064 ************************************ 00:09:20.064 START TEST dd_sparse_file_to_bdev 00:09:20.064 ************************************ 00:09:20.064 14:59:50 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:09:20.064 14:59:50 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:20.064 14:59:50 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:20.064 14:59:50 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:09:20.064 14:59:50 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:20.064 14:59:50 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:20.064 14:59:50 -- dd/sparse.sh@73 -- # gen_conf 00:09:20.064 14:59:50 -- dd/common.sh@31 -- # xtrace_disable 00:09:20.064 14:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.064 [2024-11-20 14:59:50.788180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:20.064 [2024-11-20 14:59:50.788335] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71240 ] 00:09:20.064 { 00:09:20.064 "subsystems": [ 00:09:20.064 { 00:09:20.064 "subsystem": "bdev", 00:09:20.064 "config": [ 00:09:20.064 { 00:09:20.064 "params": { 00:09:20.064 "block_size": 4096, 00:09:20.064 "filename": "dd_sparse_aio_disk", 00:09:20.064 "name": "dd_aio" 00:09:20.064 }, 00:09:20.064 "method": "bdev_aio_create" 00:09:20.064 }, 00:09:20.064 { 00:09:20.064 "params": { 00:09:20.064 "lvs_name": "dd_lvstore", 00:09:20.064 "lvol_name": "dd_lvol", 00:09:20.064 "size": 37748736, 00:09:20.064 "thin_provision": true 00:09:20.064 }, 00:09:20.064 "method": "bdev_lvol_create" 00:09:20.064 }, 00:09:20.064 { 00:09:20.064 "method": "bdev_wait_for_examine" 00:09:20.064 } 00:09:20.064 ] 00:09:20.064 } 00:09:20.064 ] 00:09:20.064 } 00:09:20.323 [2024-11-20 14:59:50.928207] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.323 [2024-11-20 14:59:50.971685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.323 [2024-11-20 14:59:51.041262] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:09:20.323  [2024-11-20T14:59:51.127Z] Copying: 12/36 [MB] (average 631 MBps)[2024-11-20 14:59:51.076328] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:09:20.581 00:09:20.581 00:09:20.581 00:09:20.581 real 0m0.533s 00:09:20.581 user 0m0.310s 00:09:20.581 sys 0m0.152s 00:09:20.581 14:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.581 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 ************************************ 00:09:20.581 END TEST dd_sparse_file_to_bdev 00:09:20.581 ************************************ 
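The dd_sparse_file_to_file and dd_sparse_file_to_bdev runs above exercise spdk_dd's --sparse hole-skipping against an AIO-backed lvstore. Below is a minimal standalone sketch of that flow, not the harness itself: it reuses the names and JSON method calls shown in the log (dd_sparse_aio_disk, dd_aio, dd_lvstore, dd_lvol, bdev_aio_create, bdev_lvol_create_lvstore, bdev_lvol_create), assumes spdk_dd is built under build/bin/, and folds the lvstore creation (done by the file_to_file config) and the thin lvol creation (done by the file_to_bdev config) into one JSON config applied in order, whereas the harness does this across two separate spdk_dd invocations.

  #!/usr/bin/env bash
  # Sketch: copy a sparse file into a thin-provisioned lvol with spdk_dd --sparse.
  set -euo pipefail
  SPDK_DD=./build/bin/spdk_dd       # assumed build location; adjust to your build tree

  # 100 MiB backing file for the AIO bdev, plus a sparse input with 4 MiB data
  # extents at offsets 0, 16 MiB and 32 MiB (apparent size 37748736 bytes).
  truncate dd_sparse_aio_disk --size 104857600
  dd if=/dev/zero of=file_zero1 bs=4M count=1
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8

  # JSON config mirroring the gen_conf output in the log; entries are applied in order.
  cat > dd_sparse.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          { "params": { "block_size": 4096, "filename": "dd_sparse_aio_disk", "name": "dd_aio" },
            "method": "bdev_aio_create" },
          { "params": { "lvs_name": "dd_lvstore", "bdev_name": "dd_aio" },
            "method": "bdev_lvol_create_lvstore" },
          { "params": { "lvs_name": "dd_lvstore", "lvol_name": "dd_lvol",
                        "size": 37748736, "thin_provision": true },
            "method": "bdev_lvol_create" },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF

  # --sparse skips holes in the input; --ob addresses the lvol bdev as lvstore/lvol.
  "$SPDK_DD" --if=file_zero1 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json dd_sparse.json

The stat --printf=%s / --printf=%b pairs in the surrounding file_to_file and bdev_to_file steps are the actual pass criteria: apparent size and allocated block count must both match between source and destination, which is what demonstrates that the holes were carried through rather than written out as zeroes.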
00:09:20.581 14:59:51 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:20.581 14:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:20.581 14:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.581 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 ************************************ 00:09:20.581 START TEST dd_sparse_bdev_to_file 00:09:20.581 ************************************ 00:09:20.581 14:59:51 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:09:20.581 14:59:51 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:09:20.581 14:59:51 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:20.581 14:59:51 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:20.581 14:59:51 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:20.581 14:59:51 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:20.581 14:59:51 -- dd/sparse.sh@91 -- # gen_conf 00:09:20.581 14:59:51 -- dd/common.sh@31 -- # xtrace_disable 00:09:20.581 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.581 [2024-11-20 14:59:51.349832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:20.581 [2024-11-20 14:59:51.349936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71276 ] 00:09:20.581 { 00:09:20.581 "subsystems": [ 00:09:20.581 { 00:09:20.581 "subsystem": "bdev", 00:09:20.581 "config": [ 00:09:20.581 { 00:09:20.581 "params": { 00:09:20.581 "block_size": 4096, 00:09:20.581 "filename": "dd_sparse_aio_disk", 00:09:20.581 "name": "dd_aio" 00:09:20.582 }, 00:09:20.582 "method": "bdev_aio_create" 00:09:20.582 }, 00:09:20.582 { 00:09:20.582 "method": "bdev_wait_for_examine" 00:09:20.582 } 00:09:20.582 ] 00:09:20.582 } 00:09:20.582 ] 00:09:20.582 } 00:09:20.840 [2024-11-20 14:59:51.483334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.840 [2024-11-20 14:59:51.520569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.840  [2024-11-20T14:59:51.903Z] Copying: 12/36 [MB] (average 1090 MBps) 00:09:21.099 00:09:21.099 14:59:51 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:21.099 14:59:51 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:21.099 14:59:51 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:21.099 14:59:51 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:21.099 14:59:51 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:21.099 14:59:51 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:21.099 14:59:51 -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:21.099 14:59:51 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:21.099 14:59:51 -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:21.099 14:59:51 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:21.099 00:09:21.099 real 0m0.495s 00:09:21.099 user 0m0.299s 00:09:21.099 sys 0m0.121s 00:09:21.099 14:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.099 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 ************************************ 00:09:21.099 END TEST dd_sparse_bdev_to_file 00:09:21.099 ************************************ 00:09:21.099 14:59:51 -- 
dd/sparse.sh@1 -- # cleanup 00:09:21.099 14:59:51 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:21.099 14:59:51 -- dd/sparse.sh@12 -- # rm file_zero1 00:09:21.099 14:59:51 -- dd/sparse.sh@13 -- # rm file_zero2 00:09:21.099 14:59:51 -- dd/sparse.sh@14 -- # rm file_zero3 00:09:21.099 00:09:21.099 real 0m1.943s 00:09:21.099 user 0m1.099s 00:09:21.099 sys 0m0.602s 00:09:21.099 14:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.099 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 ************************************ 00:09:21.099 END TEST spdk_dd_sparse 00:09:21.099 ************************************ 00:09:21.099 14:59:51 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:21.099 14:59:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.099 14:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.099 14:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:21.099 ************************************ 00:09:21.099 START TEST spdk_dd_negative 00:09:21.099 ************************************ 00:09:21.099 14:59:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:21.358 * Looking for test storage... 00:09:21.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:21.358 14:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:21.358 14:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:21.358 14:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:21.358 14:59:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:21.358 14:59:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:21.358 14:59:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:21.358 14:59:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:21.358 14:59:52 -- scripts/common.sh@335 -- # IFS=.-: 00:09:21.358 14:59:52 -- scripts/common.sh@335 -- # read -ra ver1 00:09:21.358 14:59:52 -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.358 14:59:52 -- scripts/common.sh@336 -- # read -ra ver2 00:09:21.359 14:59:52 -- scripts/common.sh@337 -- # local 'op=<' 00:09:21.359 14:59:52 -- scripts/common.sh@339 -- # ver1_l=2 00:09:21.359 14:59:52 -- scripts/common.sh@340 -- # ver2_l=1 00:09:21.359 14:59:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:21.359 14:59:52 -- scripts/common.sh@343 -- # case "$op" in 00:09:21.359 14:59:52 -- scripts/common.sh@344 -- # : 1 00:09:21.359 14:59:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:21.359 14:59:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.359 14:59:52 -- scripts/common.sh@364 -- # decimal 1 00:09:21.359 14:59:52 -- scripts/common.sh@352 -- # local d=1 00:09:21.359 14:59:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.359 14:59:52 -- scripts/common.sh@354 -- # echo 1 00:09:21.359 14:59:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:21.359 14:59:52 -- scripts/common.sh@365 -- # decimal 2 00:09:21.359 14:59:52 -- scripts/common.sh@352 -- # local d=2 00:09:21.359 14:59:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.359 14:59:52 -- scripts/common.sh@354 -- # echo 2 00:09:21.359 14:59:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:21.359 14:59:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:21.359 14:59:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:21.359 14:59:52 -- scripts/common.sh@367 -- # return 0 00:09:21.359 14:59:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.359 14:59:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.359 --rc genhtml_branch_coverage=1 00:09:21.359 --rc genhtml_function_coverage=1 00:09:21.359 --rc genhtml_legend=1 00:09:21.359 --rc geninfo_all_blocks=1 00:09:21.359 --rc geninfo_unexecuted_blocks=1 00:09:21.359 00:09:21.359 ' 00:09:21.359 14:59:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.359 --rc genhtml_branch_coverage=1 00:09:21.359 --rc genhtml_function_coverage=1 00:09:21.359 --rc genhtml_legend=1 00:09:21.359 --rc geninfo_all_blocks=1 00:09:21.359 --rc geninfo_unexecuted_blocks=1 00:09:21.359 00:09:21.359 ' 00:09:21.359 14:59:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.359 --rc genhtml_branch_coverage=1 00:09:21.359 --rc genhtml_function_coverage=1 00:09:21.359 --rc genhtml_legend=1 00:09:21.359 --rc geninfo_all_blocks=1 00:09:21.359 --rc geninfo_unexecuted_blocks=1 00:09:21.359 00:09:21.359 ' 00:09:21.359 14:59:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:21.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.359 --rc genhtml_branch_coverage=1 00:09:21.359 --rc genhtml_function_coverage=1 00:09:21.359 --rc genhtml_legend=1 00:09:21.359 --rc geninfo_all_blocks=1 00:09:21.359 --rc geninfo_unexecuted_blocks=1 00:09:21.359 00:09:21.359 ' 00:09:21.359 14:59:52 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.359 14:59:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.359 14:59:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.359 14:59:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.359 14:59:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.359 14:59:52 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.359 14:59:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.359 14:59:52 -- paths/export.sh@5 -- # export PATH 00:09:21.359 14:59:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.360 14:59:52 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.360 14:59:52 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:21.360 14:59:52 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.360 14:59:52 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:21.360 14:59:52 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:09:21.360 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.360 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.360 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.360 ************************************ 00:09:21.360 START TEST dd_invalid_arguments 00:09:21.360 ************************************ 00:09:21.360 14:59:52 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:09:21.360 14:59:52 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:21.360 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.360 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:21.360 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.360 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.360 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.360 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.360 14:59:52 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.360 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.360 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.360 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.360 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:21.620 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:21.620 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:21.620 options: 00:09:21.620 -c, --config JSON config file (default none) 00:09:21.620 --json JSON config file (default none) 00:09:21.620 --json-ignore-init-errors 00:09:21.620 don't exit on invalid config entry 00:09:21.620 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:21.620 -g, --single-file-segments 00:09:21.620 force creating just one hugetlbfs file 00:09:21.620 -h, --help show this usage 00:09:21.620 -i, --shm-id shared memory ID (optional) 00:09:21.620 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:09:21.620 --lcores lcore to CPU mapping list. The list is in the format: 00:09:21.620 [<,lcores[@CPUs]>...] 00:09:21.620 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:21.620 Within the group, '-' is used for range separator, 00:09:21.620 ',' is used for single number separator. 00:09:21.620 '( )' can be omitted for single element group, 00:09:21.620 '@' can be omitted if cpus and lcores have the same value 00:09:21.620 -n, --mem-channels channel number of memory channels used for DPDK 00:09:21.620 -p, --main-core main (primary) core for DPDK 00:09:21.620 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:21.620 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:21.620 --disable-cpumask-locks Disable CPU core lock files. 00:09:21.620 --silence-noticelog disable notice level logging to stderr 00:09:21.620 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:21.620 -u, --no-pci disable PCI access 00:09:21.620 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:21.620 --max-delay maximum reactor delay (in microseconds) 00:09:21.620 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:21.620 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:21.620 -R, --huge-unlink unlink huge files after initialization 00:09:21.620 -v, --version print SPDK version 00:09:21.620 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:21.620 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:21.620 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:21.620 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:09:21.620 Tracepoints vary in size and can use more than one trace entry. 
00:09:21.620 --rpcs-allowed comma-separated list of permitted RPCS 00:09:21.620 --env-context Opaque context for use of the env implementation 00:09:21.620 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:21.620 --no-huge run without using hugepages 00:09:21.620 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:21.620 -e, --tpoint-group [:] 00:09:21.620 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:09:21.620 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:09:21.620 Groups and masks [2024-11-20 14:59:52.173666] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:09:21.620 can be combined (e.g. thread,bdev:0x1). 00:09:21.620 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:09:21.620 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:09:21.620 [--------- DD Options ---------] 00:09:21.620 --if Input file. Must specify either --if or --ib. 00:09:21.620 --ib Input bdev. Must specifier either --if or --ib 00:09:21.620 --of Output file. Must specify either --of or --ob. 00:09:21.620 --ob Output bdev. Must specify either --of or --ob. 00:09:21.620 --iflag Input file flags. 00:09:21.620 --oflag Output file flags. 00:09:21.620 --bs I/O unit size (default: 4096) 00:09:21.621 --qd Queue depth (default: 2) 00:09:21.621 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:21.621 --skip Skip this many I/O units at start of input. (default: 0) 00:09:21.621 --seek Skip this many I/O units at start of output. (default: 0) 00:09:21.621 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:09:21.621 --sparse Enable hole skipping in input target 00:09:21.621 Available iflag and oflag values: 00:09:21.621 append - append mode 00:09:21.621 direct - use direct I/O for data 00:09:21.621 directory - fail unless a directory 00:09:21.621 dsync - use synchronized I/O for data 00:09:21.621 noatime - do not update access time 00:09:21.621 noctty - do not assign controlling terminal from file 00:09:21.621 nofollow - do not follow symlinks 00:09:21.621 nonblock - use non-blocking I/O 00:09:21.621 sync - use synchronized I/O for data and metadata 00:09:21.621 14:59:52 -- common/autotest_common.sh@653 -- # es=2 00:09:21.621 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.621 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.621 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.621 00:09:21.621 real 0m0.082s 00:09:21.621 user 0m0.054s 00:09:21.621 sys 0m0.025s 00:09:21.621 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.621 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.621 ************************************ 00:09:21.621 END TEST dd_invalid_arguments 00:09:21.621 ************************************ 00:09:21.621 14:59:52 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:09:21.621 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.621 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.621 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.621 ************************************ 00:09:21.621 START TEST dd_double_input 00:09:21.621 ************************************ 00:09:21.621 14:59:52 -- common/autotest_common.sh@1114 -- # double_input 00:09:21.621 14:59:52 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:21.621 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.621 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:21.621 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.621 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:21.621 [2024-11-20 14:59:52.298212] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:21.621 14:59:52 -- common/autotest_common.sh@653 -- # es=22 00:09:21.621 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.621 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.621 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.621 00:09:21.621 real 0m0.082s 00:09:21.621 user 0m0.042s 00:09:21.621 sys 0m0.039s 00:09:21.621 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.621 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.621 ************************************ 00:09:21.621 END TEST dd_double_input 00:09:21.621 ************************************ 00:09:21.621 14:59:52 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:09:21.621 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.621 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.621 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.621 ************************************ 00:09:21.621 START TEST dd_double_output 00:09:21.621 ************************************ 00:09:21.621 14:59:52 -- common/autotest_common.sh@1114 -- # double_output 00:09:21.621 14:59:52 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:21.621 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.621 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:21.621 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.621 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.621 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:21.621 [2024-11-20 14:59:52.414971] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:09:21.880 14:59:52 -- common/autotest_common.sh@653 -- # es=22 00:09:21.880 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.880 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.880 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.880 00:09:21.880 real 0m0.064s 00:09:21.880 user 0m0.037s 00:09:21.880 sys 0m0.026s 00:09:21.880 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.880 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.880 ************************************ 00:09:21.880 END TEST dd_double_output 00:09:21.880 ************************************ 00:09:21.880 14:59:52 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:09:21.880 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.880 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.880 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.880 ************************************ 00:09:21.881 START TEST dd_no_input 00:09:21.881 ************************************ 00:09:21.881 14:59:52 -- common/autotest_common.sh@1114 -- # no_input 00:09:21.881 14:59:52 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:21.881 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.881 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:21.881 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.881 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:21.881 [2024-11-20 14:59:52.519763] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:09:21.881 14:59:52 -- common/autotest_common.sh@653 -- # es=22 00:09:21.881 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.881 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.881 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.881 00:09:21.881 real 0m0.060s 00:09:21.881 user 0m0.035s 00:09:21.881 sys 0m0.024s 00:09:21.881 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.881 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.881 ************************************ 00:09:21.881 END TEST dd_no_input 00:09:21.881 ************************************ 00:09:21.881 14:59:52 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:09:21.881 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:21.881 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:21.881 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.881 ************************************ 
00:09:21.881 START TEST dd_no_output 00:09:21.881 ************************************ 00:09:21.881 14:59:52 -- common/autotest_common.sh@1114 -- # no_output 00:09:21.881 14:59:52 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.881 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:21.881 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.881 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:21.881 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:21.881 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:21.881 [2024-11-20 14:59:52.638611] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:09:21.881 14:59:52 -- common/autotest_common.sh@653 -- # es=22 00:09:21.881 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.881 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.881 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.881 00:09:21.881 real 0m0.084s 00:09:21.881 user 0m0.048s 00:09:21.881 sys 0m0.034s 00:09:21.881 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:21.881 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:21.881 ************************************ 00:09:21.881 END TEST dd_no_output 00:09:21.881 ************************************ 00:09:22.140 14:59:52 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:22.140 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.140 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.140 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:22.140 ************************************ 00:09:22.140 START TEST dd_wrong_blocksize 00:09:22.140 ************************************ 00:09:22.140 14:59:52 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:09:22.140 14:59:52 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.140 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.140 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.140 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.140 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:22.140 [2024-11-20 14:59:52.752018] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:09:22.140 14:59:52 -- common/autotest_common.sh@653 -- # es=22 00:09:22.140 14:59:52 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.140 14:59:52 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.140 14:59:52 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.140 00:09:22.140 real 0m0.060s 00:09:22.140 user 0m0.034s 00:09:22.140 sys 0m0.025s 00:09:22.140 14:59:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.140 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:22.140 ************************************ 00:09:22.140 END TEST dd_wrong_blocksize 00:09:22.140 ************************************ 00:09:22.140 14:59:52 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:22.140 14:59:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.140 14:59:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.140 14:59:52 -- common/autotest_common.sh@10 -- # set +x 00:09:22.140 ************************************ 00:09:22.140 START TEST dd_smaller_blocksize 00:09:22.140 ************************************ 00:09:22.140 14:59:52 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:09:22.140 14:59:52 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.140 14:59:52 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.140 14:59:52 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.140 14:59:52 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.140 14:59:52 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:09:22.140 14:59:52 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:22.140 [2024-11-20 14:59:52.862294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.140 [2024-11-20 14:59:52.862443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71495 ] 00:09:22.399 [2024-11-20 14:59:52.999696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.399 [2024-11-20 14:59:53.038481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.399 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:22.399 [2024-11-20 14:59:53.085396] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:22.399 [2024-11-20 14:59:53.085424] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.399 [2024-11-20 14:59:53.149765] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:22.659 14:59:53 -- common/autotest_common.sh@653 -- # es=244 00:09:22.659 14:59:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.659 14:59:53 -- common/autotest_common.sh@662 -- # es=116 00:09:22.659 14:59:53 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@670 -- # es=1 00:09:22.659 14:59:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.659 00:09:22.659 real 0m0.420s 00:09:22.659 user 0m0.205s 00:09:22.659 sys 0m0.109s 00:09:22.659 14:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.659 ************************************ 00:09:22.659 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 END TEST dd_smaller_blocksize 00:09:22.659 ************************************ 00:09:22.659 14:59:53 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:09:22.659 14:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.659 14:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.659 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 ************************************ 00:09:22.659 START TEST dd_invalid_count 00:09:22.659 ************************************ 00:09:22.659 14:59:53 -- common/autotest_common.sh@1114 -- # invalid_count 00:09:22.659 14:59:53 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.659 14:59:53 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.659 14:59:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.659 14:59:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.659 14:59:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:22.659 [2024-11-20 14:59:53.313226] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:09:22.659 14:59:53 -- common/autotest_common.sh@653 -- # es=22 00:09:22.659 14:59:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.659 14:59:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.659 14:59:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.659 00:09:22.659 real 0m0.057s 00:09:22.659 user 0m0.034s 00:09:22.659 sys 0m0.022s 00:09:22.659 14:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.659 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 ************************************ 00:09:22.659 END TEST dd_invalid_count 00:09:22.659 ************************************ 00:09:22.659 14:59:53 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:09:22.659 14:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.659 14:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.659 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 ************************************ 00:09:22.659 START TEST dd_invalid_oflag 00:09:22.659 ************************************ 00:09:22.659 14:59:53 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:09:22.659 14:59:53 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.659 14:59:53 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.659 14:59:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.659 14:59:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.659 14:59:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.659 14:59:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.659 14:59:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:22.659 [2024-11-20 14:59:53.412167] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:09:22.659 14:59:53 -- common/autotest_common.sh@653 -- # es=22 00:09:22.659 14:59:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.659 14:59:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.659 
14:59:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.659 00:09:22.659 real 0m0.061s 00:09:22.659 user 0m0.041s 00:09:22.659 sys 0m0.019s 00:09:22.659 14:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.659 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.659 ************************************ 00:09:22.659 END TEST dd_invalid_oflag 00:09:22.659 ************************************ 00:09:22.919 14:59:53 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:09:22.919 14:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.919 14:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.919 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 ************************************ 00:09:22.919 START TEST dd_invalid_iflag 00:09:22.919 ************************************ 00:09:22.919 14:59:53 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:09:22.919 14:59:53 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.919 14:59:53 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.919 14:59:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.919 14:59:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.919 14:59:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:09:22.919 [2024-11-20 14:59:53.528698] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:09:22.919 14:59:53 -- common/autotest_common.sh@653 -- # es=22 00:09:22.919 14:59:53 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:22.919 14:59:53 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:22.919 14:59:53 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:22.919 00:09:22.919 real 0m0.083s 00:09:22.919 user 0m0.053s 00:09:22.919 sys 0m0.028s 00:09:22.919 14:59:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:22.919 ************************************ 00:09:22.919 END TEST dd_invalid_iflag 00:09:22.919 ************************************ 00:09:22.919 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 14:59:53 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:09:22.919 14:59:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:22.919 14:59:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.919 14:59:53 -- common/autotest_common.sh@10 -- # set +x 00:09:22.919 ************************************ 00:09:22.919 START TEST dd_unknown_flag 00:09:22.919 ************************************ 00:09:22.919 14:59:53 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:09:22.919 14:59:53 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.919 14:59:53 -- common/autotest_common.sh@650 -- # local es=0 00:09:22.919 14:59:53 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.919 14:59:53 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:22.919 14:59:53 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:22.919 14:59:53 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:09:22.919 [2024-11-20 14:59:53.659879] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:22.919 [2024-11-20 14:59:53.660036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71587 ] 00:09:23.179 [2024-11-20 14:59:53.798880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.179 [2024-11-20 14:59:53.841168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.179 [2024-11-20 14:59:53.897625] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:09:23.179 [2024-11-20 14:59:53.897709] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:23.179 [2024-11-20 14:59:53.897722] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:09:23.179 [2024-11-20 14:59:53.897734] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.179 [2024-11-20 14:59:53.960854] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:23.439 14:59:54 -- common/autotest_common.sh@653 -- # es=236 00:09:23.439 14:59:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.439 14:59:54 -- common/autotest_common.sh@662 -- # es=108 00:09:23.439 14:59:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.439 14:59:54 -- common/autotest_common.sh@670 -- # es=1 00:09:23.439 14:59:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.439 00:09:23.439 real 0m0.440s 00:09:23.439 user 0m0.208s 00:09:23.439 sys 0m0.125s 00:09:23.439 14:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.439 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.439 ************************************ 00:09:23.439 END 
TEST dd_unknown_flag 00:09:23.439 ************************************ 00:09:23.439 14:59:54 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:09:23.439 14:59:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.439 14:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.439 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.439 ************************************ 00:09:23.439 START TEST dd_invalid_json 00:09:23.439 ************************************ 00:09:23.439 14:59:54 -- common/autotest_common.sh@1114 -- # invalid_json 00:09:23.439 14:59:54 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.439 14:59:54 -- dd/negative_dd.sh@95 -- # : 00:09:23.439 14:59:54 -- common/autotest_common.sh@650 -- # local es=0 00:09:23.439 14:59:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.439 14:59:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.439 14:59:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.439 14:59:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.439 14:59:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.439 14:59:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.439 14:59:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.439 14:59:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:23.439 14:59:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:23.439 14:59:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:09:23.439 [2024-11-20 14:59:54.122521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:23.439 [2024-11-20 14:59:54.122623] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71609 ] 00:09:23.698 [2024-11-20 14:59:54.254003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.698 [2024-11-20 14:59:54.299768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.698 [2024-11-20 14:59:54.299957] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:09:23.698 [2024-11-20 14:59:54.299987] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.698 [2024-11-20 14:59:54.300047] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:09:23.698 14:59:54 -- common/autotest_common.sh@653 -- # es=234 00:09:23.698 14:59:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.698 14:59:54 -- common/autotest_common.sh@662 -- # es=106 00:09:23.698 14:59:54 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:23.698 14:59:54 -- common/autotest_common.sh@670 -- # es=1 00:09:23.698 14:59:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.698 00:09:23.698 real 0m0.302s 00:09:23.698 user 0m0.139s 00:09:23.698 sys 0m0.060s 00:09:23.698 14:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.698 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.698 ************************************ 00:09:23.698 END TEST dd_invalid_json 00:09:23.698 ************************************ 00:09:23.698 ************************************ 00:09:23.698 END TEST spdk_dd_negative 00:09:23.698 ************************************ 00:09:23.698 00:09:23.698 real 0m2.512s 00:09:23.698 user 0m1.234s 00:09:23.698 sys 0m0.915s 00:09:23.698 14:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.698 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.698 ************************************ 00:09:23.698 END TEST spdk_dd 00:09:23.698 ************************************ 00:09:23.698 00:09:23.698 real 1m6.395s 00:09:23.698 user 0m40.789s 00:09:23.699 sys 0m16.293s 00:09:23.699 14:59:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.699 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.699 14:59:54 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:09:23.699 14:59:54 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:09:23.699 14:59:54 -- spdk/autotest.sh@255 -- # timing_exit lib 00:09:23.699 14:59:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.699 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.958 14:59:54 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:09:23.958 14:59:54 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:09:23.958 14:59:54 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:09:23.958 14:59:54 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:09:23.958 14:59:54 -- spdk/autotest.sh@278 -- # '[' tcp = rdma ']' 00:09:23.958 14:59:54 -- spdk/autotest.sh@281 -- # '[' tcp = tcp ']' 00:09:23.958 14:59:54 -- spdk/autotest.sh@282 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:23.958 14:59:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:23.958 14:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.958 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.958 ************************************ 00:09:23.958 START TEST 
nvmf_tcp 00:09:23.959 ************************************ 00:09:23.959 14:59:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:23.959 * Looking for test storage... 00:09:23.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:23.959 14:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:23.959 14:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:23.959 14:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:23.959 14:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:23.959 14:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:23.959 14:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:23.959 14:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:09:23.959 14:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:09:23.959 14:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.959 14:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:09:23.959 14:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:09:23.959 14:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:09:23.959 14:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:09:23.959 14:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:23.959 14:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:09:23.959 14:59:54 -- scripts/common.sh@344 -- # : 1 00:09:23.959 14:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:23.959 14:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.959 14:59:54 -- scripts/common.sh@364 -- # decimal 1 00:09:23.959 14:59:54 -- scripts/common.sh@352 -- # local d=1 00:09:23.959 14:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.959 14:59:54 -- scripts/common.sh@354 -- # echo 1 00:09:23.959 14:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:23.959 14:59:54 -- scripts/common.sh@365 -- # decimal 2 00:09:23.959 14:59:54 -- scripts/common.sh@352 -- # local d=2 00:09:23.959 14:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.959 14:59:54 -- scripts/common.sh@354 -- # echo 2 00:09:23.959 14:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:23.959 14:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:23.959 14:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:23.959 14:59:54 -- scripts/common.sh@367 -- # return 0 00:09:23.959 14:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:23.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.959 --rc genhtml_branch_coverage=1 00:09:23.959 --rc genhtml_function_coverage=1 00:09:23.959 --rc genhtml_legend=1 00:09:23.959 --rc geninfo_all_blocks=1 00:09:23.959 --rc geninfo_unexecuted_blocks=1 00:09:23.959 00:09:23.959 ' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:23.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.959 --rc genhtml_branch_coverage=1 00:09:23.959 --rc genhtml_function_coverage=1 00:09:23.959 --rc genhtml_legend=1 00:09:23.959 --rc geninfo_all_blocks=1 00:09:23.959 --rc geninfo_unexecuted_blocks=1 00:09:23.959 00:09:23.959 ' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:23.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.959 --rc 
genhtml_branch_coverage=1 00:09:23.959 --rc genhtml_function_coverage=1 00:09:23.959 --rc genhtml_legend=1 00:09:23.959 --rc geninfo_all_blocks=1 00:09:23.959 --rc geninfo_unexecuted_blocks=1 00:09:23.959 00:09:23.959 ' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:23.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.959 --rc genhtml_branch_coverage=1 00:09:23.959 --rc genhtml_function_coverage=1 00:09:23.959 --rc genhtml_legend=1 00:09:23.959 --rc geninfo_all_blocks=1 00:09:23.959 --rc geninfo_unexecuted_blocks=1 00:09:23.959 00:09:23.959 ' 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@10 -- # uname -s 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:23.959 14:59:54 -- nvmf/common.sh@7 -- # uname -s 00:09:23.959 14:59:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.959 14:59:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.959 14:59:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.959 14:59:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.959 14:59:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.959 14:59:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.959 14:59:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.959 14:59:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.959 14:59:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.959 14:59:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.959 14:59:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:23.959 14:59:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:23.959 14:59:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.959 14:59:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.959 14:59:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:23.959 14:59:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:23.959 14:59:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.959 14:59:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.959 14:59:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.959 14:59:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.959 14:59:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.959 14:59:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.959 14:59:54 -- paths/export.sh@5 -- # export PATH 00:09:23.959 14:59:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.959 14:59:54 -- nvmf/common.sh@46 -- # : 0 00:09:23.959 14:59:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:23.959 14:59:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:23.959 14:59:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:23.959 14:59:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.959 14:59:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.959 14:59:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:23.959 14:59:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:23.959 14:59:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:09:23.959 14:59:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.959 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:09:23.959 14:59:54 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:23.959 14:59:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:23.959 14:59:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.959 14:59:54 -- common/autotest_common.sh@10 -- # set +x 00:09:23.959 ************************************ 00:09:23.959 START TEST nvmf_host_management 00:09:23.959 ************************************ 00:09:23.959 14:59:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:24.219 * Looking for test storage... 
00:09:24.219 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:24.219 14:59:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:24.219 14:59:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:24.219 14:59:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:24.219 14:59:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:24.219 14:59:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:24.219 14:59:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:24.219 14:59:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:24.219 14:59:54 -- scripts/common.sh@335 -- # IFS=.-: 00:09:24.219 14:59:54 -- scripts/common.sh@335 -- # read -ra ver1 00:09:24.219 14:59:54 -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.219 14:59:54 -- scripts/common.sh@336 -- # read -ra ver2 00:09:24.219 14:59:54 -- scripts/common.sh@337 -- # local 'op=<' 00:09:24.219 14:59:54 -- scripts/common.sh@339 -- # ver1_l=2 00:09:24.219 14:59:54 -- scripts/common.sh@340 -- # ver2_l=1 00:09:24.219 14:59:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:24.219 14:59:54 -- scripts/common.sh@343 -- # case "$op" in 00:09:24.219 14:59:54 -- scripts/common.sh@344 -- # : 1 00:09:24.219 14:59:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:24.219 14:59:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.219 14:59:54 -- scripts/common.sh@364 -- # decimal 1 00:09:24.219 14:59:54 -- scripts/common.sh@352 -- # local d=1 00:09:24.219 14:59:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.219 14:59:54 -- scripts/common.sh@354 -- # echo 1 00:09:24.219 14:59:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:24.219 14:59:54 -- scripts/common.sh@365 -- # decimal 2 00:09:24.219 14:59:54 -- scripts/common.sh@352 -- # local d=2 00:09:24.219 14:59:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.219 14:59:54 -- scripts/common.sh@354 -- # echo 2 00:09:24.219 14:59:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:24.219 14:59:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:24.219 14:59:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:24.219 14:59:54 -- scripts/common.sh@367 -- # return 0 00:09:24.219 14:59:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.219 14:59:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:24.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.219 --rc genhtml_branch_coverage=1 00:09:24.219 --rc genhtml_function_coverage=1 00:09:24.219 --rc genhtml_legend=1 00:09:24.219 --rc geninfo_all_blocks=1 00:09:24.219 --rc geninfo_unexecuted_blocks=1 00:09:24.219 00:09:24.219 ' 00:09:24.219 14:59:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:24.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.219 --rc genhtml_branch_coverage=1 00:09:24.219 --rc genhtml_function_coverage=1 00:09:24.219 --rc genhtml_legend=1 00:09:24.219 --rc geninfo_all_blocks=1 00:09:24.219 --rc geninfo_unexecuted_blocks=1 00:09:24.219 00:09:24.219 ' 00:09:24.219 14:59:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:24.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.219 --rc genhtml_branch_coverage=1 00:09:24.219 --rc genhtml_function_coverage=1 00:09:24.219 --rc genhtml_legend=1 00:09:24.219 --rc geninfo_all_blocks=1 00:09:24.219 --rc geninfo_unexecuted_blocks=1 00:09:24.219 00:09:24.219 ' 00:09:24.219 
14:59:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:24.219 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.219 --rc genhtml_branch_coverage=1 00:09:24.219 --rc genhtml_function_coverage=1 00:09:24.219 --rc genhtml_legend=1 00:09:24.219 --rc geninfo_all_blocks=1 00:09:24.219 --rc geninfo_unexecuted_blocks=1 00:09:24.219 00:09:24.219 ' 00:09:24.219 14:59:54 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:24.219 14:59:54 -- nvmf/common.sh@7 -- # uname -s 00:09:24.219 14:59:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.219 14:59:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.219 14:59:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.219 14:59:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.219 14:59:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.219 14:59:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.219 14:59:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.219 14:59:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.219 14:59:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.219 14:59:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.219 14:59:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:24.219 14:59:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:24.219 14:59:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.219 14:59:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.219 14:59:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:24.219 14:59:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.219 14:59:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.219 14:59:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.219 14:59:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.220 14:59:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.220 14:59:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.220 14:59:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.220 14:59:54 -- paths/export.sh@5 -- # export PATH 00:09:24.220 14:59:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.220 14:59:54 -- nvmf/common.sh@46 -- # : 0 00:09:24.220 14:59:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:24.220 14:59:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:24.220 14:59:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:24.220 14:59:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.220 14:59:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.220 14:59:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:24.220 14:59:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:24.220 14:59:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:24.220 14:59:54 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.220 14:59:54 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.220 14:59:54 -- target/host_management.sh@104 -- # nvmftestinit 00:09:24.220 14:59:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:24.220 14:59:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.220 14:59:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:24.220 14:59:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:24.220 14:59:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:24.220 14:59:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.220 14:59:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.220 14:59:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.220 14:59:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:24.220 14:59:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:24.220 14:59:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:24.220 14:59:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:24.220 14:59:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:24.220 14:59:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:24.220 14:59:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.220 14:59:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.220 14:59:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:24.220 14:59:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:24.220 14:59:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:24.220 14:59:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:24.220 14:59:54 -- 
nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:24.220 14:59:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.220 14:59:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:24.220 14:59:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:24.220 14:59:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:24.220 14:59:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:24.220 14:59:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:24.220 Cannot find device "nvmf_init_br" 00:09:24.220 14:59:54 -- nvmf/common.sh@153 -- # true 00:09:24.220 14:59:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:24.220 Cannot find device "nvmf_tgt_br" 00:09:24.220 14:59:55 -- nvmf/common.sh@154 -- # true 00:09:24.220 14:59:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:24.220 Cannot find device "nvmf_tgt_br2" 00:09:24.220 14:59:55 -- nvmf/common.sh@155 -- # true 00:09:24.220 14:59:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:24.479 Cannot find device "nvmf_init_br" 00:09:24.479 14:59:55 -- nvmf/common.sh@156 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:24.479 Cannot find device "nvmf_tgt_br" 00:09:24.479 14:59:55 -- nvmf/common.sh@157 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:24.479 Cannot find device "nvmf_tgt_br2" 00:09:24.479 14:59:55 -- nvmf/common.sh@158 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:24.479 Cannot find device "nvmf_br" 00:09:24.479 14:59:55 -- nvmf/common.sh@159 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:24.479 Cannot find device "nvmf_init_if" 00:09:24.479 14:59:55 -- nvmf/common.sh@160 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:24.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.479 14:59:55 -- nvmf/common.sh@161 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:24.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:24.479 14:59:55 -- nvmf/common.sh@162 -- # true 00:09:24.479 14:59:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:24.479 14:59:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:24.479 14:59:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:24.479 14:59:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:24.479 14:59:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:24.479 14:59:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:24.479 14:59:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:24.479 14:59:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:24.479 14:59:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:24.479 14:59:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:24.479 14:59:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:24.479 14:59:55 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:24.479 14:59:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:24.479 14:59:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:24.479 14:59:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:24.479 14:59:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:24.479 14:59:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:24.738 14:59:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:24.738 14:59:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:24.738 14:59:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:24.738 14:59:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:24.738 14:59:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:24.738 14:59:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:24.738 14:59:55 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:24.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:09:24.738 00:09:24.738 --- 10.0.0.2 ping statistics --- 00:09:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.738 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:24.738 14:59:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:24.738 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:24.738 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:09:24.738 00:09:24.738 --- 10.0.0.3 ping statistics --- 00:09:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.738 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:24.738 14:59:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:24.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:09:24.738 00:09:24.738 --- 10.0.0.1 ping statistics --- 00:09:24.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.738 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:24.738 14:59:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.738 14:59:55 -- nvmf/common.sh@421 -- # return 0 00:09:24.738 14:59:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:24.738 14:59:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.738 14:59:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:24.738 14:59:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:24.738 14:59:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.738 14:59:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:24.738 14:59:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:24.738 14:59:55 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:09:24.738 14:59:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:24.738 14:59:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:24.739 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.739 ************************************ 00:09:24.739 START TEST nvmf_host_management 00:09:24.739 ************************************ 00:09:24.739 14:59:55 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:09:24.739 14:59:55 -- target/host_management.sh@69 -- # starttarget 00:09:24.739 14:59:55 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:24.739 14:59:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:24.739 14:59:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.739 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.739 14:59:55 -- nvmf/common.sh@469 -- # nvmfpid=71896 00:09:24.739 14:59:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:24.739 14:59:55 -- nvmf/common.sh@470 -- # waitforlisten 71896 00:09:24.739 14:59:55 -- common/autotest_common.sh@829 -- # '[' -z 71896 ']' 00:09:24.739 14:59:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.739 14:59:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.739 14:59:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.739 14:59:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.739 14:59:55 -- common/autotest_common.sh@10 -- # set +x 00:09:24.739 [2024-11-20 14:59:55.519629] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:24.739 [2024-11-20 14:59:55.519803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.997 [2024-11-20 14:59:55.664318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.997 [2024-11-20 14:59:55.711020] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:24.997 [2024-11-20 14:59:55.711457] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:24.997 [2024-11-20 14:59:55.711666] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.997 [2024-11-20 14:59:55.711880] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:24.997 [2024-11-20 14:59:55.712267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.997 [2024-11-20 14:59:55.712392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.997 [2024-11-20 14:59:55.712495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.997 [2024-11-20 14:59:55.712507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.933 14:59:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.933 14:59:56 -- common/autotest_common.sh@862 -- # return 0 00:09:25.933 14:59:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:26.192 14:59:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 14:59:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.192 14:59:56 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.192 14:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 [2024-11-20 14:59:56.818695] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.192 14:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.192 14:59:56 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:26.192 14:59:56 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 14:59:56 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:26.192 14:59:56 -- target/host_management.sh@23 -- # cat 00:09:26.192 14:59:56 -- target/host_management.sh@30 -- # rpc_cmd 00:09:26.192 14:59:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 Malloc0 00:09:26.192 [2024-11-20 14:59:56.893867] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.192 14:59:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.192 14:59:56 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:26.192 14:59:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
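The rpcs.txt batch replayed by rpc_cmd here is not echoed in the trace; only its results are visible above (the "Malloc0" bdev and the TCP listener on 10.0.0.2 port 4420). Based on those results and on the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 settings and the NQNs that appear elsewhere in this log, an equivalent standalone setup against the same target would look roughly like the sketch below; the exact flags inside rpcs.txt may differ. rpc.py reaches the target over the /var/tmp/spdk.sock Unix socket, so it does not need to run inside the nvmf_tgt_ns_spdk namespace.

  # Sketch only: target-side setup implied by the trace above.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # same call as host_management.sh@18
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512                     # 64 MB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420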
00:09:26.192 14:59:56 -- target/host_management.sh@73 -- # perfpid=71955 00:09:26.192 14:59:56 -- target/host_management.sh@74 -- # waitforlisten 71955 /var/tmp/bdevperf.sock 00:09:26.192 14:59:56 -- common/autotest_common.sh@829 -- # '[' -z 71955 ']' 00:09:26.192 14:59:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.192 14:59:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.192 14:59:56 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:26.192 14:59:56 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:26.192 14:59:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.192 14:59:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.192 14:59:56 -- nvmf/common.sh@520 -- # config=() 00:09:26.192 14:59:56 -- common/autotest_common.sh@10 -- # set +x 00:09:26.192 14:59:56 -- nvmf/common.sh@520 -- # local subsystem config 00:09:26.192 14:59:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:26.192 14:59:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:26.192 { 00:09:26.192 "params": { 00:09:26.192 "name": "Nvme$subsystem", 00:09:26.192 "trtype": "$TEST_TRANSPORT", 00:09:26.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.192 "adrfam": "ipv4", 00:09:26.192 "trsvcid": "$NVMF_PORT", 00:09:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.192 "hdgst": ${hdgst:-false}, 00:09:26.192 "ddgst": ${ddgst:-false} 00:09:26.192 }, 00:09:26.192 "method": "bdev_nvme_attach_controller" 00:09:26.192 } 00:09:26.192 EOF 00:09:26.192 )") 00:09:26.192 14:59:56 -- nvmf/common.sh@542 -- # cat 00:09:26.192 14:59:56 -- nvmf/common.sh@544 -- # jq . 00:09:26.192 14:59:56 -- nvmf/common.sh@545 -- # IFS=, 00:09:26.192 14:59:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:26.192 "params": { 00:09:26.192 "name": "Nvme0", 00:09:26.192 "trtype": "tcp", 00:09:26.192 "traddr": "10.0.0.2", 00:09:26.192 "adrfam": "ipv4", 00:09:26.192 "trsvcid": "4420", 00:09:26.192 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:26.192 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:26.192 "hdgst": false, 00:09:26.192 "ddgst": false 00:09:26.192 }, 00:09:26.192 "method": "bdev_nvme_attach_controller" 00:09:26.192 }' 00:09:26.451 [2024-11-20 14:59:57.001250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:26.451 [2024-11-20 14:59:57.001374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71955 ] 00:09:26.451 [2024-11-20 14:59:57.141097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.451 [2024-11-20 14:59:57.187817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.713 Running I/O for 10 seconds... 
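Only the per-controller "params" object is printed by gen_nvmf_target_json's trace; the function then wraps it in SPDK's usual JSON-config envelope (a "subsystems" array containing a "bdev" subsystem whose "config" list holds the bdev_nvme_attach_controller call) and feeds the result to bdevperf as /dev/fd/63. Reconstructed from the fragments above, the document bdevperf actually reads should look roughly like this (the envelope itself is not echoed in the log):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

With that attach in place, the -q 64 -o 65536 -w verify -t 10 options drive 64 outstanding 64 KiB verify I/Os against the resulting Nvme0n1 bdev for 10 seconds.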
00:09:26.713 14:59:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.713 14:59:57 -- common/autotest_common.sh@862 -- # return 0 00:09:26.713 14:59:57 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:26.714 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.714 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.714 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.714 14:59:57 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.714 14:59:57 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:26.714 14:59:57 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:26.714 14:59:57 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:26.714 14:59:57 -- target/host_management.sh@52 -- # local ret=1 00:09:26.714 14:59:57 -- target/host_management.sh@53 -- # local i 00:09:26.714 14:59:57 -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:26.714 14:59:57 -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:26.714 14:59:57 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:26.714 14:59:57 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:26.714 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.714 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.714 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.714 14:59:57 -- target/host_management.sh@55 -- # read_io_count=129 00:09:26.714 14:59:57 -- target/host_management.sh@58 -- # '[' 129 -ge 100 ']' 00:09:26.714 14:59:57 -- target/host_management.sh@59 -- # ret=0 00:09:26.714 14:59:57 -- target/host_management.sh@60 -- # break 00:09:26.714 14:59:57 -- target/host_management.sh@64 -- # return 0 00:09:26.714 14:59:57 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:26.714 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.714 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.714 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.714 14:59:57 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:26.714 14:59:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.714 14:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:26.714 [2024-11-20 14:59:57.470764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471615] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.471971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.471991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.714 [2024-11-20 14:59:57.472309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.714 [2024-11-20 14:59:57.472326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.472964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.472984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.715 [2024-11-20 14:59:57.473454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.715 [2024-11-20 14:59:57.473474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473951] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.473972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:26.716 [2024-11-20 14:59:57.473989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.474007] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bd120 is same with the state(5) to be set 00:09:26.716 [2024-11-20 14:59:57.474093] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21bd120 was disconnected and freed. reset controller. 00:09:26.716 [2024-11-20 14:59:57.474331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.716 [2024-11-20 14:59:57.474372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.474393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.716 [2024-11-20 14:59:57.474410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.474427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.716 [2024-11-20 14:59:57.474444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.474462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:26.716 [2024-11-20 14:59:57.474477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.716 [2024-11-20 14:59:57.474492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21bf6a0 is same with the state(5) to be set 00:09:26.716 [2024-11-20 14:59:57.476322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controlle 14:59:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.716 r 00:09:26.716 14:59:57 -- target/host_management.sh@87 -- # sleep 1 00:09:26.716 task offset: 34688 on job bdev=Nvme0n1 fails 00:09:26.716 00:09:26.716 Latency(us) 00:09:26.716 [2024-11-20T14:59:57.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:26.716 [2024-11-20T14:59:57.520Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:26.716 [2024-11-20T14:59:57.520Z] Job: Nvme0n1 ended in about 0.14 seconds with error 00:09:26.716 Verification LBA range: start 0x0 length 0x400 00:09:26.716 Nvme0n1 : 0.14 1742.22 108.89 444.23 0.00 27958.20 4617.31 36223.53 00:09:26.716 [2024-11-20T14:59:57.520Z] =================================================================================================================== 00:09:26.716 [2024-11-20T14:59:57.520Z] Total : 1742.22 108.89 444.23 0.00 27958.20 4617.31 36223.53 00:09:26.716 [2024-11-20 14:59:57.479545] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on 
non-zero 00:09:26.716 [2024-11-20 14:59:57.479608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21bf6a0 (9): Bad file descriptor 00:09:26.716 [2024-11-20 14:59:57.487014] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:28.091 14:59:58 -- target/host_management.sh@91 -- # kill -9 71955 00:09:28.091 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71955) - No such process 00:09:28.091 14:59:58 -- target/host_management.sh@91 -- # true 00:09:28.091 14:59:58 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:28.091 14:59:58 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:28.091 14:59:58 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:28.091 14:59:58 -- nvmf/common.sh@520 -- # config=() 00:09:28.091 14:59:58 -- nvmf/common.sh@520 -- # local subsystem config 00:09:28.091 14:59:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:28.091 14:59:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:28.091 { 00:09:28.091 "params": { 00:09:28.091 "name": "Nvme$subsystem", 00:09:28.091 "trtype": "$TEST_TRANSPORT", 00:09:28.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:28.091 "adrfam": "ipv4", 00:09:28.091 "trsvcid": "$NVMF_PORT", 00:09:28.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:28.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:28.091 "hdgst": ${hdgst:-false}, 00:09:28.091 "ddgst": ${ddgst:-false} 00:09:28.091 }, 00:09:28.091 "method": "bdev_nvme_attach_controller" 00:09:28.091 } 00:09:28.091 EOF 00:09:28.091 )") 00:09:28.091 14:59:58 -- nvmf/common.sh@542 -- # cat 00:09:28.091 14:59:58 -- nvmf/common.sh@544 -- # jq . 00:09:28.091 14:59:58 -- nvmf/common.sh@545 -- # IFS=, 00:09:28.091 14:59:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:28.091 "params": { 00:09:28.091 "name": "Nvme0", 00:09:28.091 "trtype": "tcp", 00:09:28.092 "traddr": "10.0.0.2", 00:09:28.092 "adrfam": "ipv4", 00:09:28.092 "trsvcid": "4420", 00:09:28.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:28.092 "hdgst": false, 00:09:28.092 "ddgst": false 00:09:28.092 }, 00:09:28.092 "method": "bdev_nvme_attach_controller" 00:09:28.092 }' 00:09:28.092 [2024-11-20 14:59:58.540979] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:28.092 [2024-11-20 14:59:58.541679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71986 ] 00:09:28.092 [2024-11-20 14:59:58.688798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.092 [2024-11-20 14:59:58.730961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.092 Running I/O for 1 seconds... 
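For reference, the bdevperf invocation above does not read a config file from disk: gen_nvmf_target_json assembles one controller-attach fragment per subsystem and hands the result to bdevperf through an anonymous file descriptor (--json /dev/fd/62). Pretty-printed, the fragment emitted for this run (visible flattened in the printf output above) is:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

Assuming a target already listening on 10.0.0.2:4420, roughly the same attach can be made against any running SPDK application with scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0; the generated JSON above is just the batch form of that call.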
00:09:29.467 00:09:29.467 Latency(us) 00:09:29.467 [2024-11-20T15:00:00.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.467 [2024-11-20T15:00:00.271Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:29.467 Verification LBA range: start 0x0 length 0x400 00:09:29.467 Nvme0n1 : 1.01 2173.19 135.82 0.00 0.00 28992.11 3247.01 30742.34 00:09:29.467 [2024-11-20T15:00:00.271Z] =================================================================================================================== 00:09:29.467 [2024-11-20T15:00:00.271Z] Total : 2173.19 135.82 0.00 0.00 28992.11 3247.01 30742.34 00:09:29.467 15:00:00 -- target/host_management.sh@101 -- # stoptarget 00:09:29.467 15:00:00 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:29.467 15:00:00 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:09:29.467 15:00:00 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:09:29.467 15:00:00 -- target/host_management.sh@40 -- # nvmftestfini 00:09:29.467 15:00:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:29.467 15:00:00 -- nvmf/common.sh@116 -- # sync 00:09:29.467 15:00:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:29.467 15:00:00 -- nvmf/common.sh@119 -- # set +e 00:09:29.467 15:00:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:29.467 15:00:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:29.467 rmmod nvme_tcp 00:09:29.467 rmmod nvme_fabrics 00:09:29.467 rmmod nvme_keyring 00:09:29.467 15:00:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:29.467 15:00:00 -- nvmf/common.sh@123 -- # set -e 00:09:29.468 15:00:00 -- nvmf/common.sh@124 -- # return 0 00:09:29.468 15:00:00 -- nvmf/common.sh@477 -- # '[' -n 71896 ']' 00:09:29.468 15:00:00 -- nvmf/common.sh@478 -- # killprocess 71896 00:09:29.468 15:00:00 -- common/autotest_common.sh@936 -- # '[' -z 71896 ']' 00:09:29.468 15:00:00 -- common/autotest_common.sh@940 -- # kill -0 71896 00:09:29.468 15:00:00 -- common/autotest_common.sh@941 -- # uname 00:09:29.468 15:00:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:29.468 15:00:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71896 00:09:29.726 killing process with pid 71896 00:09:29.726 15:00:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:29.726 15:00:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:29.726 15:00:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71896' 00:09:29.726 15:00:00 -- common/autotest_common.sh@955 -- # kill 71896 00:09:29.726 15:00:00 -- common/autotest_common.sh@960 -- # wait 71896 00:09:29.726 [2024-11-20 15:00:00.459616] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:29.726 15:00:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:29.726 15:00:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:29.726 15:00:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:29.726 15:00:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:29.726 15:00:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:29.726 15:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.726 15:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:29.726 15:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.726 15:00:00 -- 
nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:29.984 00:09:29.984 real 0m5.082s 00:09:29.984 user 0m20.821s 00:09:29.984 sys 0m1.273s 00:09:29.984 ************************************ 00:09:29.984 END TEST nvmf_host_management 00:09:29.984 ************************************ 00:09:29.984 15:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.984 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.984 15:00:00 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:09:29.984 00:09:29.984 real 0m5.830s 00:09:29.984 user 0m21.067s 00:09:29.984 ************************************ 00:09:29.984 END TEST nvmf_host_management 00:09:29.984 ************************************ 00:09:29.984 sys 0m1.534s 00:09:29.984 15:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.984 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.984 15:00:00 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:29.984 15:00:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:29.984 15:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.984 15:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.984 ************************************ 00:09:29.984 START TEST nvmf_lvol 00:09:29.984 ************************************ 00:09:29.984 15:00:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:29.984 * Looking for test storage... 00:09:29.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:29.984 15:00:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:29.984 15:00:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:29.984 15:00:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:30.244 15:00:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:30.244 15:00:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:30.244 15:00:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:30.244 15:00:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:30.244 15:00:00 -- scripts/common.sh@335 -- # IFS=.-: 00:09:30.244 15:00:00 -- scripts/common.sh@335 -- # read -ra ver1 00:09:30.244 15:00:00 -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.244 15:00:00 -- scripts/common.sh@336 -- # read -ra ver2 00:09:30.244 15:00:00 -- scripts/common.sh@337 -- # local 'op=<' 00:09:30.244 15:00:00 -- scripts/common.sh@339 -- # ver1_l=2 00:09:30.244 15:00:00 -- scripts/common.sh@340 -- # ver2_l=1 00:09:30.244 15:00:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:30.244 15:00:00 -- scripts/common.sh@343 -- # case "$op" in 00:09:30.244 15:00:00 -- scripts/common.sh@344 -- # : 1 00:09:30.244 15:00:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:30.244 15:00:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.244 15:00:00 -- scripts/common.sh@364 -- # decimal 1 00:09:30.244 15:00:00 -- scripts/common.sh@352 -- # local d=1 00:09:30.244 15:00:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.244 15:00:00 -- scripts/common.sh@354 -- # echo 1 00:09:30.244 15:00:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:30.244 15:00:00 -- scripts/common.sh@365 -- # decimal 2 00:09:30.244 15:00:00 -- scripts/common.sh@352 -- # local d=2 00:09:30.244 15:00:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.244 15:00:00 -- scripts/common.sh@354 -- # echo 2 00:09:30.244 15:00:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:30.244 15:00:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:30.244 15:00:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:30.244 15:00:00 -- scripts/common.sh@367 -- # return 0 00:09:30.244 15:00:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.244 15:00:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:30.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.244 --rc genhtml_branch_coverage=1 00:09:30.244 --rc genhtml_function_coverage=1 00:09:30.244 --rc genhtml_legend=1 00:09:30.244 --rc geninfo_all_blocks=1 00:09:30.244 --rc geninfo_unexecuted_blocks=1 00:09:30.244 00:09:30.244 ' 00:09:30.244 15:00:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:30.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.244 --rc genhtml_branch_coverage=1 00:09:30.244 --rc genhtml_function_coverage=1 00:09:30.244 --rc genhtml_legend=1 00:09:30.244 --rc geninfo_all_blocks=1 00:09:30.244 --rc geninfo_unexecuted_blocks=1 00:09:30.244 00:09:30.244 ' 00:09:30.244 15:00:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:30.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.244 --rc genhtml_branch_coverage=1 00:09:30.244 --rc genhtml_function_coverage=1 00:09:30.244 --rc genhtml_legend=1 00:09:30.244 --rc geninfo_all_blocks=1 00:09:30.244 --rc geninfo_unexecuted_blocks=1 00:09:30.244 00:09:30.244 ' 00:09:30.244 15:00:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:30.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.244 --rc genhtml_branch_coverage=1 00:09:30.244 --rc genhtml_function_coverage=1 00:09:30.244 --rc genhtml_legend=1 00:09:30.244 --rc geninfo_all_blocks=1 00:09:30.244 --rc geninfo_unexecuted_blocks=1 00:09:30.244 00:09:30.244 ' 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.244 15:00:00 -- nvmf/common.sh@7 -- # uname -s 00:09:30.244 15:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.244 15:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.244 15:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.244 15:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.244 15:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.244 15:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.244 15:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.244 15:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.244 15:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.244 15:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:30.244 
15:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:30.244 15:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.244 15:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.244 15:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.244 15:00:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.244 15:00:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.244 15:00:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.244 15:00:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.244 15:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.244 15:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.244 15:00:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.244 15:00:00 -- paths/export.sh@5 -- # export PATH 00:09:30.244 15:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.244 15:00:00 -- nvmf/common.sh@46 -- # : 0 00:09:30.244 15:00:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:30.244 15:00:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:30.244 15:00:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:30.244 15:00:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.244 15:00:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.244 15:00:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
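A note on the NVME_HOSTNQN/NVME_HOSTID pair generated just above: nvme gen-hostnqn produces an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and common.sh keeps the bare UUID alongside it as NVME_HOSTID so kernel-initiator tests can pass both to nvme-cli. This particular run drives I/O through bdevperf and spdk_nvme_perf rather than the kernel initiator, but the values would be consumed roughly like this (a hypothetical connect, not executed in this run; addresses and NQNs taken from elsewhere in this log):

    # Sketch only - illustrates what NVME_HOST carries; not part of this test run.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece \
        --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece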
00:09:30.244 15:00:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:30.244 15:00:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.244 15:00:00 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:30.244 15:00:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:30.244 15:00:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.244 15:00:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:30.244 15:00:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:30.244 15:00:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:30.244 15:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.244 15:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.244 15:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.244 15:00:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:30.244 15:00:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:30.244 15:00:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.244 15:00:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.244 15:00:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:30.245 15:00:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:30.245 15:00:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.245 15:00:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.245 15:00:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.245 15:00:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.245 15:00:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.245 15:00:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.245 15:00:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.245 15:00:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.245 15:00:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:30.245 15:00:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:30.245 Cannot find device "nvmf_tgt_br" 00:09:30.245 15:00:00 -- nvmf/common.sh@154 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.245 Cannot find device "nvmf_tgt_br2" 00:09:30.245 15:00:00 -- nvmf/common.sh@155 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:30.245 15:00:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:30.245 Cannot find device "nvmf_tgt_br" 00:09:30.245 15:00:00 -- nvmf/common.sh@157 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:30.245 Cannot find device "nvmf_tgt_br2" 00:09:30.245 15:00:00 -- nvmf/common.sh@158 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@159 -- # ip 
link delete nvmf_br type bridge 00:09:30.245 15:00:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:30.245 15:00:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.245 15:00:00 -- nvmf/common.sh@161 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.245 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.245 15:00:00 -- nvmf/common.sh@162 -- # true 00:09:30.245 15:00:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.245 15:00:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:30.245 15:00:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.245 15:00:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.505 15:00:01 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.505 15:00:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.505 15:00:01 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.505 15:00:01 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:30.505 15:00:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:30.505 15:00:01 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:30.505 15:00:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:30.505 15:00:01 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:30.505 15:00:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:30.505 15:00:01 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.505 15:00:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.505 15:00:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.505 15:00:01 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:30.505 15:00:01 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:30.505 15:00:01 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.505 15:00:01 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.505 15:00:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.505 15:00:01 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.505 15:00:01 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.505 15:00:01 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:30.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:09:30.505 00:09:30.505 --- 10.0.0.2 ping statistics --- 00:09:30.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.505 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:30.505 15:00:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:30.505 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:30.505 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:09:30.505 00:09:30.505 --- 10.0.0.3 ping statistics --- 00:09:30.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.505 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:30.505 15:00:01 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:30.505 00:09:30.505 --- 10.0.0.1 ping statistics --- 00:09:30.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.505 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:30.505 15:00:01 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.505 15:00:01 -- nvmf/common.sh@421 -- # return 0 00:09:30.505 15:00:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:30.505 15:00:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.505 15:00:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:30.505 15:00:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:30.505 15:00:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.505 15:00:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:30.505 15:00:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:30.765 15:00:01 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:30.765 15:00:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:30.765 15:00:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.765 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:09:30.765 15:00:01 -- nvmf/common.sh@469 -- # nvmfpid=72226 00:09:30.765 15:00:01 -- nvmf/common.sh@470 -- # waitforlisten 72226 00:09:30.765 15:00:01 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:30.765 15:00:01 -- common/autotest_common.sh@829 -- # '[' -z 72226 ']' 00:09:30.765 15:00:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.765 15:00:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.765 15:00:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.765 15:00:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.765 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:09:30.765 [2024-11-20 15:00:01.392477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:30.765 [2024-11-20 15:00:01.392688] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.765 [2024-11-20 15:00:01.536321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:31.032 [2024-11-20 15:00:01.583366] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:31.032 [2024-11-20 15:00:01.583958] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.032 [2024-11-20 15:00:01.584000] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
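Before nvmf_tgt was launched just above, the three successful pings served as the sanity check on the virtual topology nvmf_veth_init built: the target namespace nvmf_tgt_ns_spdk owns 10.0.0.2 and 10.0.0.3, the initiator side keeps 10.0.0.1 in the root namespace, and all veth peers are enslaved to the nvmf_br bridge with TCP/4420 explicitly allowed. Condensed from the commands recorded in this log (a sketch of the harness logic; nvmf_veth_init in nvmf/common.sh is the authoritative version, and the per-interface "up" steps shown in the log are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, stays in root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, moved into the ns
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With this in place, the nvmf_tgt process started inside the namespace can listen on 10.0.0.2:4420 while the initiator-side tools reach it from the root namespace.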
00:09:31.032 [2024-11-20 15:00:01.584018] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.032 [2024-11-20 15:00:01.584391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.032 [2024-11-20 15:00:01.584471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.032 [2024-11-20 15:00:01.584482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.032 15:00:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.032 15:00:01 -- common/autotest_common.sh@862 -- # return 0 00:09:31.032 15:00:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:31.032 15:00:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.032 15:00:01 -- common/autotest_common.sh@10 -- # set +x 00:09:31.032 15:00:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.032 15:00:01 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:31.628 [2024-11-20 15:00:02.183057] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.628 15:00:02 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:31.887 15:00:02 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:09:31.887 15:00:02 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:32.453 15:00:03 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:32.453 15:00:03 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:32.711 15:00:03 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:33.276 15:00:03 -- target/nvmf_lvol.sh@29 -- # lvs=037d2ad7-36e8-4c4a-945a-7e8e71431f08 00:09:33.276 15:00:03 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 037d2ad7-36e8-4c4a-945a-7e8e71431f08 lvol 20 00:09:33.535 15:00:04 -- target/nvmf_lvol.sh@32 -- # lvol=b15d34c8-a087-4082-8a13-1263768f9fb7 00:09:33.535 15:00:04 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:34.102 15:00:04 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b15d34c8-a087-4082-8a13-1263768f9fb7 00:09:34.361 15:00:05 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:34.619 [2024-11-20 15:00:05.317929] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:34.619 15:00:05 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:34.877 15:00:05 -- target/nvmf_lvol.sh@42 -- # perf_pid=72305 00:09:34.877 15:00:05 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:34.877 15:00:05 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:35.812 15:00:06 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot b15d34c8-a087-4082-8a13-1263768f9fb7 MY_SNAPSHOT 
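Stripped of the xtrace noise, the RPC sequence this nvmf_lvol run drives is short: build an lvol on a raid0 of two malloc bdevs, export it over NVMe/TCP, then reshape it while spdk_nvme_perf hammers the namespace. The snapshot command above is the midpoint; the resize, clone and inflate steps follow in the lines below. A condensed sketch, with UUIDs shown symbolically (the log carries the concrete values), where rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the log:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                      # -> Malloc0
    rpc.py bdev_malloc_create 64 512                      # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    LVS=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 20)     # 20, per LVOL_BDEV_INIT_SIZE
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf (randwrite, qd 128, 10 s) runs against the namespace while the volume is reshaped:
    SNAP=$(rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$LVOL" 30
    CLONE=$(rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)
    rpc.py bdev_lvol_inflate "$CLONE"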
00:09:36.380 15:00:06 -- target/nvmf_lvol.sh@47 -- # snapshot=7681b21c-1714-4f5b-b810-269939e47fdc 00:09:36.380 15:00:06 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize b15d34c8-a087-4082-8a13-1263768f9fb7 30 00:09:36.638 15:00:07 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7681b21c-1714-4f5b-b810-269939e47fdc MY_CLONE 00:09:36.896 15:00:07 -- target/nvmf_lvol.sh@49 -- # clone=47679a31-52c7-4b63-9d24-edce3858b8f1 00:09:36.896 15:00:07 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 47679a31-52c7-4b63-9d24-edce3858b8f1 00:09:37.463 15:00:08 -- target/nvmf_lvol.sh@53 -- # wait 72305 00:09:45.574 Initializing NVMe Controllers 00:09:45.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:45.574 Controller IO queue size 128, less than required. 00:09:45.574 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:45.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:45.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:45.574 Initialization complete. Launching workers. 00:09:45.574 ======================================================== 00:09:45.574 Latency(us) 00:09:45.574 Device Information : IOPS MiB/s Average min max 00:09:45.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8516.33 33.27 15047.94 1697.20 59365.64 00:09:45.574 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8771.11 34.26 14600.19 2096.71 64497.73 00:09:45.574 ======================================================== 00:09:45.574 Total : 17287.44 67.53 14820.77 1697.20 64497.73 00:09:45.574 00:09:45.574 15:00:15 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:45.574 15:00:16 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b15d34c8-a087-4082-8a13-1263768f9fb7 00:09:45.837 15:00:16 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 037d2ad7-36e8-4c4a-945a-7e8e71431f08 00:09:46.402 15:00:16 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:46.402 15:00:16 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:46.403 15:00:16 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:46.403 15:00:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:46.403 15:00:16 -- nvmf/common.sh@116 -- # sync 00:09:46.403 15:00:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:46.403 15:00:17 -- nvmf/common.sh@119 -- # set +e 00:09:46.403 15:00:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:46.403 15:00:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:46.403 rmmod nvme_tcp 00:09:46.403 rmmod nvme_fabrics 00:09:46.403 rmmod nvme_keyring 00:09:46.403 15:00:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:46.403 15:00:17 -- nvmf/common.sh@123 -- # set -e 00:09:46.403 15:00:17 -- nvmf/common.sh@124 -- # return 0 00:09:46.403 15:00:17 -- nvmf/common.sh@477 -- # '[' -n 72226 ']' 00:09:46.403 15:00:17 -- nvmf/common.sh@478 -- # killprocess 72226 00:09:46.403 15:00:17 -- common/autotest_common.sh@936 -- # '[' -z 72226 ']' 00:09:46.403 15:00:17 -- common/autotest_common.sh@940 -- # kill -0 72226 00:09:46.403 15:00:17 -- common/autotest_common.sh@941 -- # uname 00:09:46.403 
15:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:46.403 15:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72226 00:09:46.403 killing process with pid 72226 00:09:46.403 15:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:46.403 15:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:46.403 15:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72226' 00:09:46.403 15:00:17 -- common/autotest_common.sh@955 -- # kill 72226 00:09:46.403 15:00:17 -- common/autotest_common.sh@960 -- # wait 72226 00:09:46.662 15:00:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:46.662 15:00:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:46.662 15:00:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:46.662 15:00:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.662 15:00:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:46.662 15:00:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.662 15:00:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.662 15:00:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.662 15:00:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:46.662 00:09:46.662 real 0m16.697s 00:09:46.662 user 1m7.521s 00:09:46.662 sys 0m5.554s 00:09:46.662 15:00:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:46.662 ************************************ 00:09:46.662 END TEST nvmf_lvol 00:09:46.662 ************************************ 00:09:46.662 15:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:46.662 15:00:17 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:46.662 15:00:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.662 15:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.662 15:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:46.662 ************************************ 00:09:46.662 START TEST nvmf_lvs_grow 00:09:46.662 ************************************ 00:09:46.662 15:00:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:46.662 * Looking for test storage... 
00:09:46.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:46.662 15:00:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:46.662 15:00:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:46.662 15:00:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:46.921 15:00:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:46.921 15:00:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:46.921 15:00:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:46.921 15:00:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:46.921 15:00:17 -- scripts/common.sh@335 -- # IFS=.-: 00:09:46.921 15:00:17 -- scripts/common.sh@335 -- # read -ra ver1 00:09:46.921 15:00:17 -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.921 15:00:17 -- scripts/common.sh@336 -- # read -ra ver2 00:09:46.921 15:00:17 -- scripts/common.sh@337 -- # local 'op=<' 00:09:46.921 15:00:17 -- scripts/common.sh@339 -- # ver1_l=2 00:09:46.921 15:00:17 -- scripts/common.sh@340 -- # ver2_l=1 00:09:46.921 15:00:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:46.921 15:00:17 -- scripts/common.sh@343 -- # case "$op" in 00:09:46.921 15:00:17 -- scripts/common.sh@344 -- # : 1 00:09:46.921 15:00:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:46.921 15:00:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.921 15:00:17 -- scripts/common.sh@364 -- # decimal 1 00:09:46.921 15:00:17 -- scripts/common.sh@352 -- # local d=1 00:09:46.921 15:00:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.921 15:00:17 -- scripts/common.sh@354 -- # echo 1 00:09:46.921 15:00:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:46.921 15:00:17 -- scripts/common.sh@365 -- # decimal 2 00:09:46.921 15:00:17 -- scripts/common.sh@352 -- # local d=2 00:09:46.921 15:00:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.921 15:00:17 -- scripts/common.sh@354 -- # echo 2 00:09:46.921 15:00:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:46.921 15:00:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:46.921 15:00:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:46.921 15:00:17 -- scripts/common.sh@367 -- # return 0 00:09:46.921 15:00:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.921 15:00:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.921 --rc genhtml_branch_coverage=1 00:09:46.921 --rc genhtml_function_coverage=1 00:09:46.921 --rc genhtml_legend=1 00:09:46.921 --rc geninfo_all_blocks=1 00:09:46.921 --rc geninfo_unexecuted_blocks=1 00:09:46.921 00:09:46.921 ' 00:09:46.921 15:00:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.921 --rc genhtml_branch_coverage=1 00:09:46.921 --rc genhtml_function_coverage=1 00:09:46.921 --rc genhtml_legend=1 00:09:46.921 --rc geninfo_all_blocks=1 00:09:46.921 --rc geninfo_unexecuted_blocks=1 00:09:46.921 00:09:46.921 ' 00:09:46.921 15:00:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.921 --rc genhtml_branch_coverage=1 00:09:46.921 --rc genhtml_function_coverage=1 00:09:46.921 --rc genhtml_legend=1 00:09:46.921 --rc geninfo_all_blocks=1 00:09:46.921 --rc geninfo_unexecuted_blocks=1 00:09:46.921 00:09:46.921 ' 00:09:46.921 
15:00:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:46.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.921 --rc genhtml_branch_coverage=1 00:09:46.921 --rc genhtml_function_coverage=1 00:09:46.921 --rc genhtml_legend=1 00:09:46.921 --rc geninfo_all_blocks=1 00:09:46.921 --rc geninfo_unexecuted_blocks=1 00:09:46.921 00:09:46.921 ' 00:09:46.921 15:00:17 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:46.921 15:00:17 -- nvmf/common.sh@7 -- # uname -s 00:09:46.921 15:00:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.921 15:00:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.921 15:00:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.921 15:00:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.921 15:00:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.921 15:00:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.921 15:00:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.921 15:00:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.921 15:00:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.921 15:00:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:46.921 15:00:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:09:46.921 15:00:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.921 15:00:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.921 15:00:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:46.921 15:00:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:46.921 15:00:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.921 15:00:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.921 15:00:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.921 15:00:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.921 15:00:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.921 15:00:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.921 15:00:17 -- paths/export.sh@5 -- # export PATH 00:09:46.921 15:00:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.921 15:00:17 -- nvmf/common.sh@46 -- # : 0 00:09:46.921 15:00:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:46.921 15:00:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:46.921 15:00:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:46.921 15:00:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.921 15:00:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.921 15:00:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:46.921 15:00:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:46.921 15:00:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:46.921 15:00:17 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:46.921 15:00:17 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:46.921 15:00:17 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:46.921 15:00:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:46.921 15:00:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.921 15:00:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:46.921 15:00:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:46.921 15:00:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:46.921 15:00:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.921 15:00:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.921 15:00:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.921 15:00:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:46.921 15:00:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:46.921 15:00:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.921 15:00:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.921 15:00:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:46.922 15:00:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:46.922 15:00:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:46.922 15:00:17 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:46.922 15:00:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:46.922 15:00:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.922 15:00:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:46.922 15:00:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:46.922 15:00:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:46.922 15:00:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:46.922 15:00:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:46.922 15:00:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:46.922 Cannot find device "nvmf_tgt_br" 00:09:46.922 15:00:17 -- nvmf/common.sh@154 -- # true 00:09:46.922 15:00:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:46.922 Cannot find device "nvmf_tgt_br2" 00:09:46.922 15:00:17 -- nvmf/common.sh@155 -- # true 00:09:46.922 15:00:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:46.922 15:00:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:46.922 Cannot find device "nvmf_tgt_br" 00:09:46.922 15:00:17 -- nvmf/common.sh@157 -- # true 00:09:46.922 15:00:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:46.922 Cannot find device "nvmf_tgt_br2" 00:09:46.922 15:00:17 -- nvmf/common.sh@158 -- # true 00:09:46.922 15:00:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:46.922 15:00:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:47.181 15:00:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.181 15:00:17 -- nvmf/common.sh@161 -- # true 00:09:47.181 15:00:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.181 15:00:17 -- nvmf/common.sh@162 -- # true 00:09:47.181 15:00:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.181 15:00:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.181 15:00:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.181 15:00:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.181 15:00:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.181 15:00:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.181 15:00:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.181 15:00:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.181 15:00:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.181 15:00:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:47.181 15:00:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:47.181 15:00:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:47.181 15:00:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:47.181 15:00:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:47.181 15:00:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:09:47.181 15:00:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:47.181 15:00:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:47.181 15:00:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:47.181 15:00:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:47.181 15:00:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:47.181 15:00:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:47.181 15:00:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:47.181 15:00:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:47.181 15:00:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:47.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:09:47.181 00:09:47.181 --- 10.0.0.2 ping statistics --- 00:09:47.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.181 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:09:47.181 15:00:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:47.181 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:47.181 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:09:47.181 00:09:47.181 --- 10.0.0.3 ping statistics --- 00:09:47.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.181 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:47.181 15:00:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:47.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:09:47.181 00:09:47.181 --- 10.0.0.1 ping statistics --- 00:09:47.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.181 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:47.181 15:00:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.181 15:00:17 -- nvmf/common.sh@421 -- # return 0 00:09:47.181 15:00:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:47.181 15:00:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.181 15:00:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:47.181 15:00:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:47.181 15:00:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.181 15:00:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:47.181 15:00:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:47.181 15:00:17 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:47.181 15:00:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:47.181 15:00:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.181 15:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:47.181 15:00:17 -- nvmf/common.sh@469 -- # nvmfpid=72636 00:09:47.181 15:00:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:47.181 15:00:17 -- nvmf/common.sh@470 -- # waitforlisten 72636 00:09:47.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
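The nvmf_lvs_grow run that starts here (lvs_grow_clean first) exercises a logical volume store sitting on a file-backed AIO bdev: the backing file is created at 200M, an lvstore with 4 MiB clusters is built on top, a 150-unit lvol is carved out, then the file is truncated to 400M and bdev_aio_rescan makes the AIO bdev pick up the new size; the total_data_clusters checks around those steps verify how the store reacts. Condensed from the steps the following log lines walk through (paths and sizes as used in this run; rpc.py stands for scripts/rpc.py):

    AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    rm -f "$AIO" && truncate -s 200M "$AIO"
    rpc.py bdev_aio_create "$AIO" aio_bdev 4096
    LVS=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 clusters at 200M
    LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 150)
    truncate -s 400M "$AIO"
    rpc.py bdev_aio_rescan aio_bdev     # AIO bdev resized: 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # re-checked after rescan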
00:09:47.181 15:00:17 -- common/autotest_common.sh@829 -- # '[' -z 72636 ']' 00:09:47.181 15:00:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.182 15:00:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.182 15:00:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.182 15:00:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.182 15:00:17 -- common/autotest_common.sh@10 -- # set +x 00:09:47.440 [2024-11-20 15:00:18.004660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:47.440 [2024-11-20 15:00:18.004795] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.440 [2024-11-20 15:00:18.149434] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.440 [2024-11-20 15:00:18.186305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:47.440 [2024-11-20 15:00:18.186458] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.440 [2024-11-20 15:00:18.186473] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:47.440 [2024-11-20 15:00:18.186482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.440 [2024-11-20 15:00:18.186516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.375 15:00:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.375 15:00:19 -- common/autotest_common.sh@862 -- # return 0 00:09:48.375 15:00:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:48.375 15:00:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.375 15:00:19 -- common/autotest_common.sh@10 -- # set +x 00:09:48.375 15:00:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.375 15:00:19 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:48.634 [2024-11-20 15:00:19.301911] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:48.634 15:00:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.634 15:00:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.634 15:00:19 -- common/autotest_common.sh@10 -- # set +x 00:09:48.634 ************************************ 00:09:48.634 START TEST lvs_grow_clean 00:09:48.634 ************************************ 00:09:48.634 15:00:19 -- common/autotest_common.sh@1114 -- # lvs_grow 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.634 15:00:19 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:48.892 15:00:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:49.151 15:00:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:49.409 15:00:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=59c4aef0-c13c-44eb-a246-51818f646e1a 00:09:49.409 15:00:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:49.409 15:00:19 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:09:49.667 15:00:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:49.667 15:00:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:49.667 15:00:20 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 59c4aef0-c13c-44eb-a246-51818f646e1a lvol 150 00:09:49.925 15:00:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=58479105-bc9b-4ab9-9d48-27d73862d545 00:09:49.925 15:00:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:49.925 15:00:20 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:50.183 [2024-11-20 15:00:20.818725] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:50.183 [2024-11-20 15:00:20.819071] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:50.183 true 00:09:50.183 15:00:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:50.183 15:00:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:09:50.441 15:00:21 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:50.441 15:00:21 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:50.799 15:00:21 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 58479105-bc9b-4ab9-9d48-27d73862d545 00:09:51.056 15:00:21 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:51.315 [2024-11-20 15:00:22.019450] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.315 15:00:22 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:51.574 15:00:22 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:51.574 15:00:22 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72724 00:09:51.574 15:00:22 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess 
$bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.574 15:00:22 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72724 /var/tmp/bdevperf.sock 00:09:51.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:51.574 15:00:22 -- common/autotest_common.sh@829 -- # '[' -z 72724 ']' 00:09:51.574 15:00:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:51.574 15:00:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.574 15:00:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:51.574 15:00:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.574 15:00:22 -- common/autotest_common.sh@10 -- # set +x 00:09:51.574 [2024-11-20 15:00:22.361494] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:51.574 [2024-11-20 15:00:22.361991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72724 ] 00:09:51.832 [2024-11-20 15:00:22.496741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.832 [2024-11-20 15:00:22.539140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.765 15:00:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:52.765 15:00:23 -- common/autotest_common.sh@862 -- # return 0 00:09:52.765 15:00:23 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:53.023 Nvme0n1 00:09:53.023 15:00:23 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:53.281 [ 00:09:53.281 { 00:09:53.281 "name": "Nvme0n1", 00:09:53.281 "aliases": [ 00:09:53.281 "58479105-bc9b-4ab9-9d48-27d73862d545" 00:09:53.281 ], 00:09:53.281 "product_name": "NVMe disk", 00:09:53.281 "block_size": 4096, 00:09:53.281 "num_blocks": 38912, 00:09:53.281 "uuid": "58479105-bc9b-4ab9-9d48-27d73862d545", 00:09:53.281 "assigned_rate_limits": { 00:09:53.281 "rw_ios_per_sec": 0, 00:09:53.281 "rw_mbytes_per_sec": 0, 00:09:53.281 "r_mbytes_per_sec": 0, 00:09:53.281 "w_mbytes_per_sec": 0 00:09:53.281 }, 00:09:53.281 "claimed": false, 00:09:53.281 "zoned": false, 00:09:53.281 "supported_io_types": { 00:09:53.281 "read": true, 00:09:53.281 "write": true, 00:09:53.281 "unmap": true, 00:09:53.281 "write_zeroes": true, 00:09:53.281 "flush": true, 00:09:53.281 "reset": true, 00:09:53.281 "compare": true, 00:09:53.281 "compare_and_write": true, 00:09:53.281 "abort": true, 00:09:53.281 "nvme_admin": true, 00:09:53.281 "nvme_io": true 00:09:53.281 }, 00:09:53.281 "driver_specific": { 00:09:53.281 "nvme": [ 00:09:53.281 { 00:09:53.281 "trid": { 00:09:53.281 "trtype": "TCP", 00:09:53.281 "adrfam": "IPv4", 00:09:53.281 "traddr": "10.0.0.2", 00:09:53.281 "trsvcid": "4420", 00:09:53.281 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:53.281 }, 00:09:53.281 "ctrlr_data": { 00:09:53.281 "cntlid": 1, 00:09:53.281 "vendor_id": "0x8086", 00:09:53.281 "model_number": "SPDK bdev Controller", 00:09:53.281 "serial_number": "SPDK0", 00:09:53.281 "firmware_revision": "24.01.1", 00:09:53.281 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:53.281 "oacs": { 
00:09:53.281 "security": 0, 00:09:53.281 "format": 0, 00:09:53.281 "firmware": 0, 00:09:53.281 "ns_manage": 0 00:09:53.281 }, 00:09:53.281 "multi_ctrlr": true, 00:09:53.281 "ana_reporting": false 00:09:53.281 }, 00:09:53.281 "vs": { 00:09:53.281 "nvme_version": "1.3" 00:09:53.281 }, 00:09:53.281 "ns_data": { 00:09:53.281 "id": 1, 00:09:53.281 "can_share": true 00:09:53.281 } 00:09:53.281 } 00:09:53.281 ], 00:09:53.281 "mp_policy": "active_passive" 00:09:53.281 } 00:09:53.281 } 00:09:53.281 ] 00:09:53.281 15:00:24 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72753 00:09:53.281 15:00:24 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:53.281 15:00:24 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:53.540 Running I/O for 10 seconds... 00:09:54.472 Latency(us) 00:09:54.472 [2024-11-20T15:00:25.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.472 [2024-11-20T15:00:25.276Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.472 Nvme0n1 : 1.00 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:54.472 [2024-11-20T15:00:25.276Z] =================================================================================================================== 00:09:54.472 [2024-11-20T15:00:25.276Z] Total : 7112.00 27.78 0.00 0.00 0.00 0.00 0.00 00:09:54.472 00:09:55.402 15:00:26 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:09:55.402 [2024-11-20T15:00:26.206Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.402 Nvme0n1 : 2.00 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:55.402 [2024-11-20T15:00:26.206Z] =================================================================================================================== 00:09:55.402 [2024-11-20T15:00:26.206Z] Total : 7048.50 27.53 0.00 0.00 0.00 0.00 0.00 00:09:55.402 00:09:55.659 true 00:09:55.659 15:00:26 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:09:55.659 15:00:26 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:55.916 15:00:26 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:55.916 15:00:26 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:55.916 15:00:26 -- target/nvmf_lvs_grow.sh@65 -- # wait 72753 00:09:56.480 [2024-11-20T15:00:27.284Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.480 Nvme0n1 : 3.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:56.480 [2024-11-20T15:00:27.284Z] =================================================================================================================== 00:09:56.480 [2024-11-20T15:00:27.284Z] Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:56.480 00:09:57.413 [2024-11-20T15:00:28.217Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.413 Nvme0n1 : 4.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:57.413 [2024-11-20T15:00:28.217Z] =================================================================================================================== 00:09:57.413 [2024-11-20T15:00:28.217Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:57.413 00:09:58.785 [2024-11-20T15:00:29.589Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.785 Nvme0n1 : 5.00 6807.20 
26.59 0.00 0.00 0.00 0.00 0.00 00:09:58.785 [2024-11-20T15:00:29.589Z] =================================================================================================================== 00:09:58.785 [2024-11-20T15:00:29.589Z] Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:58.785 00:09:59.720 [2024-11-20T15:00:30.524Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.720 Nvme0n1 : 6.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:59.720 [2024-11-20T15:00:30.524Z] =================================================================================================================== 00:09:59.720 [2024-11-20T15:00:30.524Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:59.720 00:10:00.654 [2024-11-20T15:00:31.458Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.654 Nvme0n1 : 7.00 6785.43 26.51 0.00 0.00 0.00 0.00 0.00 00:10:00.654 [2024-11-20T15:00:31.458Z] =================================================================================================================== 00:10:00.654 [2024-11-20T15:00:31.458Z] Total : 6785.43 26.51 0.00 0.00 0.00 0.00 0.00 00:10:00.654 00:10:01.588 [2024-11-20T15:00:32.392Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:01.588 Nvme0n1 : 8.00 6778.62 26.48 0.00 0.00 0.00 0.00 0.00 00:10:01.588 [2024-11-20T15:00:32.392Z] =================================================================================================================== 00:10:01.588 [2024-11-20T15:00:32.392Z] Total : 6778.62 26.48 0.00 0.00 0.00 0.00 0.00 00:10:01.588 00:10:02.522 [2024-11-20T15:00:33.326Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:02.522 Nvme0n1 : 9.00 6787.44 26.51 0.00 0.00 0.00 0.00 0.00 00:10:02.522 [2024-11-20T15:00:33.326Z] =================================================================================================================== 00:10:02.522 [2024-11-20T15:00:33.326Z] Total : 6787.44 26.51 0.00 0.00 0.00 0.00 0.00 00:10:02.522 00:10:03.485 [2024-11-20T15:00:34.289Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.485 Nvme0n1 : 10.00 6781.80 26.49 0.00 0.00 0.00 0.00 0.00 00:10:03.485 [2024-11-20T15:00:34.289Z] =================================================================================================================== 00:10:03.485 [2024-11-20T15:00:34.289Z] Total : 6781.80 26.49 0.00 0.00 0.00 0.00 0.00 00:10:03.485 00:10:03.485 00:10:03.485 Latency(us) 00:10:03.485 [2024-11-20T15:00:34.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.485 [2024-11-20T15:00:34.289Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:03.485 Nvme0n1 : 10.02 6783.22 26.50 0.00 0.00 18864.90 15728.64 54096.99 00:10:03.485 [2024-11-20T15:00:34.289Z] =================================================================================================================== 00:10:03.485 [2024-11-20T15:00:34.289Z] Total : 6783.22 26.50 0.00 0.00 18864.90 15728.64 54096.99 00:10:03.485 0 00:10:03.485 15:00:34 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72724 00:10:03.485 15:00:34 -- common/autotest_common.sh@936 -- # '[' -z 72724 ']' 00:10:03.485 15:00:34 -- common/autotest_common.sh@940 -- # kill -0 72724 00:10:03.485 15:00:34 -- common/autotest_common.sh@941 -- # uname 00:10:03.485 15:00:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:03.485 15:00:34 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 72724 00:10:03.485 killing process with pid 72724 00:10:03.485 Received shutdown signal, test time was about 10.000000 seconds 00:10:03.485 00:10:03.485 Latency(us) 00:10:03.485 [2024-11-20T15:00:34.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:03.485 [2024-11-20T15:00:34.289Z] =================================================================================================================== 00:10:03.485 [2024-11-20T15:00:34.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:03.485 15:00:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:03.485 15:00:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:03.485 15:00:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72724' 00:10:03.485 15:00:34 -- common/autotest_common.sh@955 -- # kill 72724 00:10:03.485 15:00:34 -- common/autotest_common.sh@960 -- # wait 72724 00:10:03.744 15:00:34 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:04.002 15:00:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:04.002 15:00:34 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:04.260 15:00:35 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:04.260 15:00:35 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:10:04.260 15:00:35 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:04.518 [2024-11-20 15:00:35.244404] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:04.518 15:00:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:04.518 15:00:35 -- common/autotest_common.sh@650 -- # local es=0 00:10:04.518 15:00:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:04.518 15:00:35 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.518 15:00:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.518 15:00:35 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.518 15:00:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.518 15:00:35 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.518 15:00:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.518 15:00:35 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:04.518 15:00:35 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:04.518 15:00:35 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:04.777 request: 00:10:04.777 { 00:10:04.777 "uuid": "59c4aef0-c13c-44eb-a246-51818f646e1a", 00:10:04.777 "method": "bdev_lvol_get_lvstores", 00:10:04.777 "req_id": 1 00:10:04.777 } 00:10:04.777 Got JSON-RPC error response 00:10:04.777 response: 00:10:04.777 { 00:10:04.777 "code": -19, 00:10:04.777 "message": "No such device" 00:10:04.777 } 00:10:04.777 15:00:35 -- 
common/autotest_common.sh@653 -- # es=1 00:10:04.777 15:00:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.777 15:00:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.777 15:00:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.777 15:00:35 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:05.037 aio_bdev 00:10:05.323 15:00:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 58479105-bc9b-4ab9-9d48-27d73862d545 00:10:05.323 15:00:35 -- common/autotest_common.sh@897 -- # local bdev_name=58479105-bc9b-4ab9-9d48-27d73862d545 00:10:05.323 15:00:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:05.323 15:00:35 -- common/autotest_common.sh@899 -- # local i 00:10:05.323 15:00:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:05.323 15:00:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:05.323 15:00:35 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:05.323 15:00:36 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 58479105-bc9b-4ab9-9d48-27d73862d545 -t 2000 00:10:05.582 [ 00:10:05.582 { 00:10:05.582 "name": "58479105-bc9b-4ab9-9d48-27d73862d545", 00:10:05.582 "aliases": [ 00:10:05.582 "lvs/lvol" 00:10:05.582 ], 00:10:05.582 "product_name": "Logical Volume", 00:10:05.582 "block_size": 4096, 00:10:05.582 "num_blocks": 38912, 00:10:05.582 "uuid": "58479105-bc9b-4ab9-9d48-27d73862d545", 00:10:05.582 "assigned_rate_limits": { 00:10:05.582 "rw_ios_per_sec": 0, 00:10:05.582 "rw_mbytes_per_sec": 0, 00:10:05.582 "r_mbytes_per_sec": 0, 00:10:05.582 "w_mbytes_per_sec": 0 00:10:05.582 }, 00:10:05.582 "claimed": false, 00:10:05.582 "zoned": false, 00:10:05.582 "supported_io_types": { 00:10:05.582 "read": true, 00:10:05.582 "write": true, 00:10:05.582 "unmap": true, 00:10:05.582 "write_zeroes": true, 00:10:05.582 "flush": false, 00:10:05.582 "reset": true, 00:10:05.582 "compare": false, 00:10:05.582 "compare_and_write": false, 00:10:05.582 "abort": false, 00:10:05.582 "nvme_admin": false, 00:10:05.582 "nvme_io": false 00:10:05.582 }, 00:10:05.582 "driver_specific": { 00:10:05.582 "lvol": { 00:10:05.582 "lvol_store_uuid": "59c4aef0-c13c-44eb-a246-51818f646e1a", 00:10:05.582 "base_bdev": "aio_bdev", 00:10:05.582 "thin_provision": false, 00:10:05.582 "snapshot": false, 00:10:05.582 "clone": false, 00:10:05.582 "esnap_clone": false 00:10:05.582 } 00:10:05.582 } 00:10:05.582 } 00:10:05.582 ] 00:10:05.582 15:00:36 -- common/autotest_common.sh@905 -- # return 0 00:10:05.582 15:00:36 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:05.582 15:00:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:06.150 15:00:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:06.150 15:00:36 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:06.150 15:00:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:06.150 15:00:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:06.150 15:00:36 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 58479105-bc9b-4ab9-9d48-27d73862d545 00:10:06.717 15:00:37 -- 
target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59c4aef0-c13c-44eb-a246-51818f646e1a 00:10:07.004 15:00:37 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:07.263 15:00:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.521 ************************************ 00:10:07.521 END TEST lvs_grow_clean 00:10:07.521 ************************************ 00:10:07.521 00:10:07.521 real 0m18.952s 00:10:07.521 user 0m18.173s 00:10:07.521 sys 0m2.502s 00:10:07.521 15:00:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:07.521 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:07.780 15:00:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:07.780 15:00:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.780 15:00:38 -- common/autotest_common.sh@10 -- # set +x 00:10:07.780 ************************************ 00:10:07.780 START TEST lvs_grow_dirty 00:10:07.780 ************************************ 00:10:07.780 15:00:38 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:07.780 15:00:38 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:08.039 15:00:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:08.039 15:00:38 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:08.297 15:00:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:08.297 15:00:38 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:08.297 15:00:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:08.558 15:00:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:08.558 15:00:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:08.558 15:00:39 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 lvol 150 00:10:08.818 15:00:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:08.818 15:00:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:08.818 15:00:39 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:09.076 
[2024-11-20 15:00:39.724682] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:09.076 [2024-11-20 15:00:39.724772] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:09.076 true 00:10:09.076 15:00:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:09.076 15:00:39 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:09.337 15:00:40 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:09.337 15:00:40 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:09.600 15:00:40 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:09.859 15:00:40 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:10.117 15:00:40 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.378 15:00:41 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=73005 00:10:10.378 15:00:41 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:10.378 15:00:41 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.378 15:00:41 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 73005 /var/tmp/bdevperf.sock 00:10:10.378 15:00:41 -- common/autotest_common.sh@829 -- # '[' -z 73005 ']' 00:10:10.378 15:00:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.378 15:00:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:10.378 15:00:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.378 15:00:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:10.378 15:00:41 -- common/autotest_common.sh@10 -- # set +x 00:10:10.378 [2024-11-20 15:00:41.115632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
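The RPC calls traced just above export the 150M lvol over NVMe/TCP: a subsystem is created, the lvol bdev is added as its namespace, and data plus discovery listeners are opened on 10.0.0.2:4420. A sketch of the same sequence (the RPC variable and relative script path are assumptions; the trace calls scripts/rpc.py by absolute path, and the TCP transport was already created earlier in the log):

RPC=./scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                        # done once, earlier in this run
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 932b4c01-9110-4871-94bb-c1f295d0ecb7   # lvol bdev becomes the namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420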
00:10:10.378 [2024-11-20 15:00:41.116004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73005 ] 00:10:10.637 [2024-11-20 15:00:41.251149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.637 [2024-11-20 15:00:41.294965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.571 15:00:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.571 15:00:42 -- common/autotest_common.sh@862 -- # return 0 00:10:11.571 15:00:42 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:11.829 Nvme0n1 00:10:11.829 15:00:42 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:12.087 [ 00:10:12.087 { 00:10:12.087 "name": "Nvme0n1", 00:10:12.087 "aliases": [ 00:10:12.087 "932b4c01-9110-4871-94bb-c1f295d0ecb7" 00:10:12.087 ], 00:10:12.087 "product_name": "NVMe disk", 00:10:12.087 "block_size": 4096, 00:10:12.087 "num_blocks": 38912, 00:10:12.088 "uuid": "932b4c01-9110-4871-94bb-c1f295d0ecb7", 00:10:12.088 "assigned_rate_limits": { 00:10:12.088 "rw_ios_per_sec": 0, 00:10:12.088 "rw_mbytes_per_sec": 0, 00:10:12.088 "r_mbytes_per_sec": 0, 00:10:12.088 "w_mbytes_per_sec": 0 00:10:12.088 }, 00:10:12.088 "claimed": false, 00:10:12.088 "zoned": false, 00:10:12.088 "supported_io_types": { 00:10:12.088 "read": true, 00:10:12.088 "write": true, 00:10:12.088 "unmap": true, 00:10:12.088 "write_zeroes": true, 00:10:12.088 "flush": true, 00:10:12.088 "reset": true, 00:10:12.088 "compare": true, 00:10:12.088 "compare_and_write": true, 00:10:12.088 "abort": true, 00:10:12.088 "nvme_admin": true, 00:10:12.088 "nvme_io": true 00:10:12.088 }, 00:10:12.088 "driver_specific": { 00:10:12.088 "nvme": [ 00:10:12.088 { 00:10:12.088 "trid": { 00:10:12.088 "trtype": "TCP", 00:10:12.088 "adrfam": "IPv4", 00:10:12.088 "traddr": "10.0.0.2", 00:10:12.088 "trsvcid": "4420", 00:10:12.088 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:12.088 }, 00:10:12.088 "ctrlr_data": { 00:10:12.088 "cntlid": 1, 00:10:12.088 "vendor_id": "0x8086", 00:10:12.088 "model_number": "SPDK bdev Controller", 00:10:12.088 "serial_number": "SPDK0", 00:10:12.088 "firmware_revision": "24.01.1", 00:10:12.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.088 "oacs": { 00:10:12.088 "security": 0, 00:10:12.088 "format": 0, 00:10:12.088 "firmware": 0, 00:10:12.088 "ns_manage": 0 00:10:12.088 }, 00:10:12.088 "multi_ctrlr": true, 00:10:12.088 "ana_reporting": false 00:10:12.088 }, 00:10:12.088 "vs": { 00:10:12.088 "nvme_version": "1.3" 00:10:12.088 }, 00:10:12.088 "ns_data": { 00:10:12.088 "id": 1, 00:10:12.088 "can_share": true 00:10:12.088 } 00:10:12.088 } 00:10:12.088 ], 00:10:12.088 "mp_policy": "active_passive" 00:10:12.088 } 00:10:12.088 } 00:10:12.088 ] 00:10:12.346 15:00:42 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=73023 00:10:12.346 15:00:42 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:12.346 15:00:42 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:12.346 Running I/O for 10 seconds... 
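bdevperf is started idle (-z) on its own RPC socket, the exported subsystem is attached as bdev Nvme0n1 over TCP, and the pre-configured 10-second randwrite workload is then triggered. A sketch of that driving sequence using the flags from the trace (relative paths inside an SPDK checkout are an assumption):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &   # wait for RPC
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0    # creates bdev Nvme0n1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests      # run the configured workload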
00:10:13.279 Latency(us) 00:10:13.279 [2024-11-20T15:00:44.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.279 [2024-11-20T15:00:44.083Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.279 Nvme0n1 : 1.00 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:13.279 [2024-11-20T15:00:44.083Z] =================================================================================================================== 00:10:13.279 [2024-11-20T15:00:44.083Z] Total : 7493.00 29.27 0.00 0.00 0.00 0.00 0.00 00:10:13.279 00:10:14.215 15:00:44 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:14.215 [2024-11-20T15:00:45.019Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.215 Nvme0n1 : 2.00 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:14.215 [2024-11-20T15:00:45.019Z] =================================================================================================================== 00:10:14.215 [2024-11-20T15:00:45.019Z] Total : 7366.00 28.77 0.00 0.00 0.00 0.00 0.00 00:10:14.215 00:10:14.474 true 00:10:14.474 15:00:45 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:14.474 15:00:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:14.732 15:00:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:14.732 15:00:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:14.732 15:00:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 73023 00:10:15.298 [2024-11-20T15:00:46.102Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.298 Nvme0n1 : 3.00 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:10:15.298 [2024-11-20T15:00:46.102Z] =================================================================================================================== 00:10:15.298 [2024-11-20T15:00:46.102Z] Total : 7323.67 28.61 0.00 0.00 0.00 0.00 0.00 00:10:15.298 00:10:16.233 [2024-11-20T15:00:47.037Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.233 Nvme0n1 : 4.00 7190.50 28.09 0.00 0.00 0.00 0.00 0.00 00:10:16.233 [2024-11-20T15:00:47.037Z] =================================================================================================================== 00:10:16.233 [2024-11-20T15:00:47.037Z] Total : 7190.50 28.09 0.00 0.00 0.00 0.00 0.00 00:10:16.233 00:10:17.617 [2024-11-20T15:00:48.421Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.617 Nvme0n1 : 5.00 7051.00 27.54 0.00 0.00 0.00 0.00 0.00 00:10:17.617 [2024-11-20T15:00:48.421Z] =================================================================================================================== 00:10:17.617 [2024-11-20T15:00:48.421Z] Total : 7051.00 27.54 0.00 0.00 0.00 0.00 0.00 00:10:17.617 00:10:18.553 [2024-11-20T15:00:49.358Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.554 Nvme0n1 : 6.00 6956.33 27.17 0.00 0.00 0.00 0.00 0.00 00:10:18.554 [2024-11-20T15:00:49.358Z] =================================================================================================================== 00:10:18.554 [2024-11-20T15:00:49.358Z] Total : 6956.33 27.17 0.00 0.00 0.00 0.00 0.00 00:10:18.554 00:10:19.487 [2024-11-20T15:00:50.291Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:19.487 Nvme0n1 : 7.00 6924.14 27.05 0.00 0.00 0.00 0.00 0.00 00:10:19.487 [2024-11-20T15:00:50.291Z] =================================================================================================================== 00:10:19.487 [2024-11-20T15:00:50.291Z] Total : 6924.14 27.05 0.00 0.00 0.00 0.00 0.00 00:10:19.487 00:10:20.420 [2024-11-20T15:00:51.224Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.420 Nvme0n1 : 8.00 6900.00 26.95 0.00 0.00 0.00 0.00 0.00 00:10:20.420 [2024-11-20T15:00:51.224Z] =================================================================================================================== 00:10:20.420 [2024-11-20T15:00:51.224Z] Total : 6900.00 26.95 0.00 0.00 0.00 0.00 0.00 00:10:20.420 00:10:21.353 [2024-11-20T15:00:52.157Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.353 Nvme0n1 : 9.00 6881.22 26.88 0.00 0.00 0.00 0.00 0.00 00:10:21.353 [2024-11-20T15:00:52.157Z] =================================================================================================================== 00:10:21.353 [2024-11-20T15:00:52.157Z] Total : 6881.22 26.88 0.00 0.00 0.00 0.00 0.00 00:10:21.353 00:10:22.288 [2024-11-20T15:00:53.092Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.288 Nvme0n1 : 10.00 6828.10 26.67 0.00 0.00 0.00 0.00 0.00 00:10:22.288 [2024-11-20T15:00:53.092Z] =================================================================================================================== 00:10:22.288 [2024-11-20T15:00:53.092Z] Total : 6828.10 26.67 0.00 0.00 0.00 0.00 0.00 00:10:22.288 00:10:22.288 00:10:22.288 Latency(us) 00:10:22.288 [2024-11-20T15:00:53.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.288 [2024-11-20T15:00:53.092Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:22.288 Nvme0n1 : 10.01 6833.44 26.69 0.00 0.00 18725.27 3530.01 89605.59 00:10:22.288 [2024-11-20T15:00:53.092Z] =================================================================================================================== 00:10:22.288 [2024-11-20T15:00:53.092Z] Total : 6833.44 26.69 0.00 0.00 18725.27 3530.01 89605.59 00:10:22.288 0 00:10:22.288 15:00:53 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 73005 00:10:22.288 15:00:53 -- common/autotest_common.sh@936 -- # '[' -z 73005 ']' 00:10:22.288 15:00:53 -- common/autotest_common.sh@940 -- # kill -0 73005 00:10:22.288 15:00:53 -- common/autotest_common.sh@941 -- # uname 00:10:22.288 15:00:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.288 15:00:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73005 00:10:22.288 killing process with pid 73005 00:10:22.288 Received shutdown signal, test time was about 10.000000 seconds 00:10:22.288 00:10:22.288 Latency(us) 00:10:22.288 [2024-11-20T15:00:53.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.288 [2024-11-20T15:00:53.092Z] =================================================================================================================== 00:10:22.288 [2024-11-20T15:00:53.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:22.288 15:00:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:22.288 15:00:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:22.288 15:00:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73005' 00:10:22.288 15:00:53 -- common/autotest_common.sh@955 
-- # kill 73005 00:10:22.288 15:00:53 -- common/autotest_common.sh@960 -- # wait 73005 00:10:22.546 15:00:53 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:22.805 15:00:53 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:22.805 15:00:53 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72636 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@74 -- # wait 72636 00:10:23.065 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72636 Killed "${NVMF_APP[@]}" "$@" 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@74 -- # true 00:10:23.065 15:00:53 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:10:23.065 15:00:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:23.065 15:00:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.065 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.065 15:00:53 -- nvmf/common.sh@469 -- # nvmfpid=73159 00:10:23.065 15:00:53 -- nvmf/common.sh@470 -- # waitforlisten 73159 00:10:23.065 15:00:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:23.065 15:00:53 -- common/autotest_common.sh@829 -- # '[' -z 73159 ']' 00:10:23.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.065 15:00:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.065 15:00:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.065 15:00:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.065 15:00:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.065 15:00:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.324 [2024-11-20 15:00:53.915305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:23.325 [2024-11-20 15:00:53.915750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.325 [2024-11-20 15:00:54.069853] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.325 [2024-11-20 15:00:54.107277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:23.325 [2024-11-20 15:00:54.107687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.325 [2024-11-20 15:00:54.107710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.325 [2024-11-20 15:00:54.107721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
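This is the dirty half of the test: the first target (pid 72636) is killed with SIGKILL so the grown lvstore is never cleanly unloaded, a fresh nvmf_tgt (pid 73159) is started in the same namespace, and re-creating the aio bdev forces blobstore recovery, after which the cluster counts must still reflect the grow. A sketch of that check with the UUID from this run (relative paths and the $nvmfpid variable are assumptions; the expected values 61/99 are the ones asserted later in the trace):

kill -9 "$nvmfpid"; wait "$nvmfpid" || true        # hard-kill the target, leaving the lvstore dirty
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target process
./scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096       # re-attach the file; recovery replays the dirty blobstore
./scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 | jq -r '.[0].free_clusters'         # expect 61
./scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 | jq -r '.[0].total_data_clusters'   # expect 99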
00:10:23.325 [2024-11-20 15:00:54.107751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.583 15:00:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.583 15:00:54 -- common/autotest_common.sh@862 -- # return 0 00:10:23.583 15:00:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:23.583 15:00:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:23.583 15:00:54 -- common/autotest_common.sh@10 -- # set +x 00:10:23.583 15:00:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.583 15:00:54 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.841 [2024-11-20 15:00:54.467911] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:23.841 [2024-11-20 15:00:54.468881] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:23.841 [2024-11-20 15:00:54.469266] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:23.841 15:00:54 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:10:23.841 15:00:54 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:23.841 15:00:54 -- common/autotest_common.sh@897 -- # local bdev_name=932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:23.841 15:00:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:23.841 15:00:54 -- common/autotest_common.sh@899 -- # local i 00:10:23.841 15:00:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:23.841 15:00:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:23.841 15:00:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:24.100 15:00:54 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 932b4c01-9110-4871-94bb-c1f295d0ecb7 -t 2000 00:10:24.358 [ 00:10:24.358 { 00:10:24.358 "name": "932b4c01-9110-4871-94bb-c1f295d0ecb7", 00:10:24.358 "aliases": [ 00:10:24.358 "lvs/lvol" 00:10:24.358 ], 00:10:24.358 "product_name": "Logical Volume", 00:10:24.358 "block_size": 4096, 00:10:24.358 "num_blocks": 38912, 00:10:24.358 "uuid": "932b4c01-9110-4871-94bb-c1f295d0ecb7", 00:10:24.358 "assigned_rate_limits": { 00:10:24.358 "rw_ios_per_sec": 0, 00:10:24.358 "rw_mbytes_per_sec": 0, 00:10:24.358 "r_mbytes_per_sec": 0, 00:10:24.358 "w_mbytes_per_sec": 0 00:10:24.358 }, 00:10:24.358 "claimed": false, 00:10:24.358 "zoned": false, 00:10:24.358 "supported_io_types": { 00:10:24.358 "read": true, 00:10:24.358 "write": true, 00:10:24.358 "unmap": true, 00:10:24.358 "write_zeroes": true, 00:10:24.358 "flush": false, 00:10:24.358 "reset": true, 00:10:24.358 "compare": false, 00:10:24.358 "compare_and_write": false, 00:10:24.358 "abort": false, 00:10:24.358 "nvme_admin": false, 00:10:24.358 "nvme_io": false 00:10:24.358 }, 00:10:24.358 "driver_specific": { 00:10:24.358 "lvol": { 00:10:24.358 "lvol_store_uuid": "a429ea2d-3566-4ce9-bd56-39b02c430cf7", 00:10:24.358 "base_bdev": "aio_bdev", 00:10:24.358 "thin_provision": false, 00:10:24.358 "snapshot": false, 00:10:24.358 "clone": false, 00:10:24.358 "esnap_clone": false 00:10:24.358 } 00:10:24.358 } 00:10:24.358 } 00:10:24.358 ] 00:10:24.358 15:00:55 -- common/autotest_common.sh@905 -- # return 0 00:10:24.358 15:00:55 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:24.358 15:00:55 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:10:24.616 15:00:55 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:10:24.616 15:00:55 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:24.616 15:00:55 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:10:24.876 15:00:55 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:10:24.876 15:00:55 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:25.444 [2024-11-20 15:00:55.957911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:25.444 15:00:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:25.444 15:00:55 -- common/autotest_common.sh@650 -- # local es=0 00:10:25.444 15:00:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:25.444 15:00:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.444 15:00:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.444 15:00:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.444 15:00:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.444 15:00:56 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.444 15:00:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:25.444 15:00:56 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:25.444 15:00:56 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:25.444 15:00:56 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:25.704 request: 00:10:25.704 { 00:10:25.704 "uuid": "a429ea2d-3566-4ce9-bd56-39b02c430cf7", 00:10:25.704 "method": "bdev_lvol_get_lvstores", 00:10:25.704 "req_id": 1 00:10:25.704 } 00:10:25.704 Got JSON-RPC error response 00:10:25.704 response: 00:10:25.704 { 00:10:25.704 "code": -19, 00:10:25.704 "message": "No such device" 00:10:25.704 } 00:10:25.704 15:00:56 -- common/autotest_common.sh@653 -- # es=1 00:10:25.704 15:00:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:25.704 15:00:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:25.704 15:00:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:25.704 15:00:56 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.963 aio_bdev 00:10:25.963 15:00:56 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:25.963 15:00:56 -- common/autotest_common.sh@897 -- # local bdev_name=932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:25.963 15:00:56 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:25.963 15:00:56 -- common/autotest_common.sh@899 -- # local i 00:10:25.963 15:00:56 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:25.963 15:00:56 -- 
common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:25.963 15:00:56 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:26.222 15:00:56 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 932b4c01-9110-4871-94bb-c1f295d0ecb7 -t 2000 00:10:26.479 [ 00:10:26.479 { 00:10:26.479 "name": "932b4c01-9110-4871-94bb-c1f295d0ecb7", 00:10:26.479 "aliases": [ 00:10:26.479 "lvs/lvol" 00:10:26.479 ], 00:10:26.479 "product_name": "Logical Volume", 00:10:26.479 "block_size": 4096, 00:10:26.479 "num_blocks": 38912, 00:10:26.479 "uuid": "932b4c01-9110-4871-94bb-c1f295d0ecb7", 00:10:26.479 "assigned_rate_limits": { 00:10:26.479 "rw_ios_per_sec": 0, 00:10:26.479 "rw_mbytes_per_sec": 0, 00:10:26.479 "r_mbytes_per_sec": 0, 00:10:26.479 "w_mbytes_per_sec": 0 00:10:26.479 }, 00:10:26.479 "claimed": false, 00:10:26.479 "zoned": false, 00:10:26.479 "supported_io_types": { 00:10:26.479 "read": true, 00:10:26.479 "write": true, 00:10:26.479 "unmap": true, 00:10:26.479 "write_zeroes": true, 00:10:26.479 "flush": false, 00:10:26.479 "reset": true, 00:10:26.480 "compare": false, 00:10:26.480 "compare_and_write": false, 00:10:26.480 "abort": false, 00:10:26.480 "nvme_admin": false, 00:10:26.480 "nvme_io": false 00:10:26.480 }, 00:10:26.480 "driver_specific": { 00:10:26.480 "lvol": { 00:10:26.480 "lvol_store_uuid": "a429ea2d-3566-4ce9-bd56-39b02c430cf7", 00:10:26.480 "base_bdev": "aio_bdev", 00:10:26.480 "thin_provision": false, 00:10:26.480 "snapshot": false, 00:10:26.480 "clone": false, 00:10:26.480 "esnap_clone": false 00:10:26.480 } 00:10:26.480 } 00:10:26.480 } 00:10:26.480 ] 00:10:26.480 15:00:57 -- common/autotest_common.sh@905 -- # return 0 00:10:26.480 15:00:57 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:26.480 15:00:57 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:10:26.737 15:00:57 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:10:26.737 15:00:57 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:26.737 15:00:57 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:10:26.996 15:00:57 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:10:26.996 15:00:57 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 932b4c01-9110-4871-94bb-c1f295d0ecb7 00:10:27.255 15:00:58 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a429ea2d-3566-4ce9-bd56-39b02c430cf7 00:10:27.823 15:00:58 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:27.823 15:00:58 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:28.400 ************************************ 00:10:28.400 END TEST lvs_grow_dirty 00:10:28.400 ************************************ 00:10:28.400 00:10:28.400 real 0m20.670s 00:10:28.400 user 0m44.854s 00:10:28.400 sys 0m7.848s 00:10:28.400 15:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:28.400 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:10:28.400 15:00:59 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:28.400 15:00:59 -- common/autotest_common.sh@806 -- # type=--id 00:10:28.400 15:00:59 -- 
common/autotest_common.sh@807 -- # id=0 00:10:28.400 15:00:59 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:10:28.400 15:00:59 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:28.400 15:00:59 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:10:28.400 15:00:59 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:10:28.400 15:00:59 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:10:28.400 15:00:59 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:28.400 nvmf_trace.0 00:10:28.400 15:00:59 -- common/autotest_common.sh@821 -- # return 0 00:10:28.400 15:00:59 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:28.400 15:00:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:28.400 15:00:59 -- nvmf/common.sh@116 -- # sync 00:10:28.686 15:00:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:28.686 15:00:59 -- nvmf/common.sh@119 -- # set +e 00:10:28.686 15:00:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:28.686 15:00:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:28.686 rmmod nvme_tcp 00:10:28.686 rmmod nvme_fabrics 00:10:28.686 rmmod nvme_keyring 00:10:28.686 15:00:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:28.686 15:00:59 -- nvmf/common.sh@123 -- # set -e 00:10:28.686 15:00:59 -- nvmf/common.sh@124 -- # return 0 00:10:28.686 15:00:59 -- nvmf/common.sh@477 -- # '[' -n 73159 ']' 00:10:28.686 15:00:59 -- nvmf/common.sh@478 -- # killprocess 73159 00:10:28.686 15:00:59 -- common/autotest_common.sh@936 -- # '[' -z 73159 ']' 00:10:28.686 15:00:59 -- common/autotest_common.sh@940 -- # kill -0 73159 00:10:28.686 15:00:59 -- common/autotest_common.sh@941 -- # uname 00:10:28.686 15:00:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.686 15:00:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73159 00:10:28.686 killing process with pid 73159 00:10:28.686 15:00:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:28.686 15:00:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:28.686 15:00:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73159' 00:10:28.686 15:00:59 -- common/autotest_common.sh@955 -- # kill 73159 00:10:28.686 15:00:59 -- common/autotest_common.sh@960 -- # wait 73159 00:10:28.944 15:00:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:28.944 15:00:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:28.944 15:00:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:28.944 15:00:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:28.944 15:00:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:28.944 15:00:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.944 15:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.944 15:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.944 15:00:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:28.944 ************************************ 00:10:28.944 END TEST nvmf_lvs_grow 00:10:28.944 ************************************ 00:10:28.944 00:10:28.944 real 0m42.215s 00:10:28.944 user 1m9.424s 00:10:28.944 sys 0m11.039s 00:10:28.944 15:00:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:28.944 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:10:28.944 15:00:59 -- nvmf/nvmf.sh@49 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:28.944 15:00:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:28.944 15:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.944 15:00:59 -- common/autotest_common.sh@10 -- # set +x 00:10:28.944 ************************************ 00:10:28.944 START TEST nvmf_bdev_io_wait 00:10:28.944 ************************************ 00:10:28.944 15:00:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:28.944 * Looking for test storage... 00:10:28.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:28.944 15:00:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:28.944 15:00:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:28.944 15:00:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:29.203 15:00:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:29.203 15:00:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:29.203 15:00:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:29.203 15:00:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:29.203 15:00:59 -- scripts/common.sh@335 -- # IFS=.-: 00:10:29.203 15:00:59 -- scripts/common.sh@335 -- # read -ra ver1 00:10:29.203 15:00:59 -- scripts/common.sh@336 -- # IFS=.-: 00:10:29.203 15:00:59 -- scripts/common.sh@336 -- # read -ra ver2 00:10:29.203 15:00:59 -- scripts/common.sh@337 -- # local 'op=<' 00:10:29.203 15:00:59 -- scripts/common.sh@339 -- # ver1_l=2 00:10:29.203 15:00:59 -- scripts/common.sh@340 -- # ver2_l=1 00:10:29.203 15:00:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:29.203 15:00:59 -- scripts/common.sh@343 -- # case "$op" in 00:10:29.203 15:00:59 -- scripts/common.sh@344 -- # : 1 00:10:29.203 15:00:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:29.203 15:00:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:29.203 15:00:59 -- scripts/common.sh@364 -- # decimal 1 00:10:29.203 15:00:59 -- scripts/common.sh@352 -- # local d=1 00:10:29.203 15:00:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:29.203 15:00:59 -- scripts/common.sh@354 -- # echo 1 00:10:29.203 15:00:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:29.203 15:00:59 -- scripts/common.sh@365 -- # decimal 2 00:10:29.203 15:00:59 -- scripts/common.sh@352 -- # local d=2 00:10:29.203 15:00:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:29.203 15:00:59 -- scripts/common.sh@354 -- # echo 2 00:10:29.203 15:00:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:29.203 15:00:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:29.203 15:00:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:29.203 15:00:59 -- scripts/common.sh@367 -- # return 0 00:10:29.203 15:00:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:29.203 15:00:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.203 --rc genhtml_branch_coverage=1 00:10:29.203 --rc genhtml_function_coverage=1 00:10:29.203 --rc genhtml_legend=1 00:10:29.203 --rc geninfo_all_blocks=1 00:10:29.203 --rc geninfo_unexecuted_blocks=1 00:10:29.203 00:10:29.203 ' 00:10:29.203 15:00:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:29.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.203 --rc genhtml_branch_coverage=1 00:10:29.203 --rc genhtml_function_coverage=1 00:10:29.203 --rc genhtml_legend=1 00:10:29.203 --rc geninfo_all_blocks=1 00:10:29.204 --rc geninfo_unexecuted_blocks=1 00:10:29.204 00:10:29.204 ' 00:10:29.204 15:00:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:29.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.204 --rc genhtml_branch_coverage=1 00:10:29.204 --rc genhtml_function_coverage=1 00:10:29.204 --rc genhtml_legend=1 00:10:29.204 --rc geninfo_all_blocks=1 00:10:29.204 --rc geninfo_unexecuted_blocks=1 00:10:29.204 00:10:29.204 ' 00:10:29.204 15:00:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:29.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:29.204 --rc genhtml_branch_coverage=1 00:10:29.204 --rc genhtml_function_coverage=1 00:10:29.204 --rc genhtml_legend=1 00:10:29.204 --rc geninfo_all_blocks=1 00:10:29.204 --rc geninfo_unexecuted_blocks=1 00:10:29.204 00:10:29.204 ' 00:10:29.204 15:00:59 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:29.204 15:00:59 -- nvmf/common.sh@7 -- # uname -s 00:10:29.204 15:00:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.204 15:00:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.204 15:00:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.204 15:00:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.204 15:00:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.204 15:00:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.204 15:00:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.204 15:00:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.204 15:00:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.204 15:00:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 
00:10:29.204 15:00:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:10:29.204 15:00:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.204 15:00:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.204 15:00:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:29.204 15:00:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:29.204 15:00:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.204 15:00:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.204 15:00:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.204 15:00:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.204 15:00:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.204 15:00:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.204 15:00:59 -- paths/export.sh@5 -- # export PATH 00:10:29.204 15:00:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.204 15:00:59 -- nvmf/common.sh@46 -- # : 0 00:10:29.204 15:00:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:29.204 15:00:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:29.204 15:00:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:29.204 15:00:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.204 15:00:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.204 15:00:59 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:10:29.204 15:00:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:29.204 15:00:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:29.204 15:00:59 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.204 15:00:59 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.204 15:00:59 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:29.204 15:00:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:29.204 15:00:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.204 15:00:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:29.204 15:00:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:29.204 15:00:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:29.204 15:00:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.204 15:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.204 15:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.204 15:00:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:29.204 15:00:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:29.204 15:00:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.204 15:00:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.204 15:00:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:29.204 15:00:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:29.204 15:00:59 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:29.204 15:00:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:29.204 15:00:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:29.204 15:00:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.204 15:00:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:29.204 15:00:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:29.204 15:00:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:29.204 15:00:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:29.204 15:00:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:29.204 15:00:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:29.204 Cannot find device "nvmf_tgt_br" 00:10:29.204 15:00:59 -- nvmf/common.sh@154 -- # true 00:10:29.204 15:00:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:29.204 Cannot find device "nvmf_tgt_br2" 00:10:29.204 15:00:59 -- nvmf/common.sh@155 -- # true 00:10:29.204 15:00:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:29.204 15:00:59 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:29.204 Cannot find device "nvmf_tgt_br" 00:10:29.204 15:00:59 -- nvmf/common.sh@157 -- # true 00:10:29.204 15:00:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:29.204 Cannot find device "nvmf_tgt_br2" 00:10:29.204 15:00:59 -- nvmf/common.sh@158 -- # true 00:10:29.205 15:00:59 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:29.205 15:00:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:29.205 15:01:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:29.463 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.463 15:01:00 -- nvmf/common.sh@161 -- # true 00:10:29.463 15:01:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:29.463 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:29.463 15:01:00 -- nvmf/common.sh@162 -- # true 00:10:29.463 15:01:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:29.463 15:01:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:29.463 15:01:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:29.463 15:01:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:29.463 15:01:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:29.463 15:01:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:29.463 15:01:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:29.463 15:01:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:29.463 15:01:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:29.463 15:01:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:29.463 15:01:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:29.463 15:01:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:29.463 15:01:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:29.463 15:01:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:29.463 15:01:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:29.463 15:01:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:29.463 15:01:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:29.463 15:01:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:29.464 15:01:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:29.464 15:01:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:29.464 15:01:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:29.464 15:01:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:29.464 15:01:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:29.464 15:01:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:29.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:29.464 00:10:29.464 --- 10.0.0.2 ping statistics --- 00:10:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.464 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:29.464 15:01:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:29.464 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:29.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:10:29.464 00:10:29.464 --- 10.0.0.3 ping statistics --- 00:10:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.464 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:29.464 15:01:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:29.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:29.464 00:10:29.464 --- 10.0.0.1 ping statistics --- 00:10:29.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.464 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:29.464 15:01:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.464 15:01:00 -- nvmf/common.sh@421 -- # return 0 00:10:29.464 15:01:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:29.464 15:01:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.464 15:01:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:29.464 15:01:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:29.464 15:01:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.464 15:01:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:29.464 15:01:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:29.464 15:01:00 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:29.464 15:01:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:29.464 15:01:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.464 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.464 15:01:00 -- nvmf/common.sh@469 -- # nvmfpid=73472 00:10:29.464 15:01:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:29.464 15:01:00 -- nvmf/common.sh@470 -- # waitforlisten 73472 00:10:29.464 15:01:00 -- common/autotest_common.sh@829 -- # '[' -z 73472 ']' 00:10:29.464 15:01:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.464 15:01:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.464 15:01:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.464 15:01:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.464 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.722 [2024-11-20 15:01:00.270520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.722 [2024-11-20 15:01:00.270635] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.722 [2024-11-20 15:01:00.409179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:29.722 [2024-11-20 15:01:00.449893] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:29.722 [2024-11-20 15:01:00.450059] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.722 [2024-11-20 15:01:00.450076] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.722 [2024-11-20 15:01:00.450086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:29.722 [2024-11-20 15:01:00.450243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.722 [2024-11-20 15:01:00.451495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.722 [2024-11-20 15:01:00.451689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.722 [2024-11-20 15:01:00.451697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.722 15:01:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.722 15:01:00 -- common/autotest_common.sh@862 -- # return 0 00:10:29.722 15:01:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:29.722 15:01:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.722 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 15:01:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 [2024-11-20 15:01:00.620757] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 Malloc0 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.981 15:01:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.981 15:01:00 -- common/autotest_common.sh@10 -- # set +x 00:10:29.981 [2024-11-20 15:01:00.684524] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.981 15:01:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.981 15:01:00 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73505 00:10:29.982 15:01:00 
-- target/bdev_io_wait.sh@30 -- # READ_PID=73507 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # config=() 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # local subsystem config 00:10:29.982 15:01:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:29.982 { 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme$subsystem", 00:10:29.982 "trtype": "$TEST_TRANSPORT", 00:10:29.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "$NVMF_PORT", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.982 "hdgst": ${hdgst:-false}, 00:10:29.982 "ddgst": ${ddgst:-false} 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 } 00:10:29.982 EOF 00:10:29.982 )") 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73509 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # config=() 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # local subsystem config 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # cat 00:10:29.982 15:01:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:29.982 { 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme$subsystem", 00:10:29.982 "trtype": "$TEST_TRANSPORT", 00:10:29.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "$NVMF_PORT", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.982 "hdgst": ${hdgst:-false}, 00:10:29.982 "ddgst": ${ddgst:-false} 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 } 00:10:29.982 EOF 00:10:29.982 )") 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # config=() 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # local subsystem config 00:10:29.982 15:01:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:29.982 { 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme$subsystem", 00:10:29.982 "trtype": "$TEST_TRANSPORT", 00:10:29.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "$NVMF_PORT", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.982 "hdgst": ${hdgst:-false}, 00:10:29.982 "ddgst": ${ddgst:-false} 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 } 00:10:29.982 EOF 00:10:29.982 )") 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # cat 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:29.982 15:01:00 -- nvmf/common.sh@544 -- # jq . 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # config=() 00:10:29.982 15:01:00 -- nvmf/common.sh@520 -- # local subsystem config 00:10:29.982 15:01:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:29.982 { 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme$subsystem", 00:10:29.982 "trtype": "$TEST_TRANSPORT", 00:10:29.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "$NVMF_PORT", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:29.982 "hdgst": ${hdgst:-false}, 00:10:29.982 "ddgst": ${ddgst:-false} 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 } 00:10:29.982 EOF 00:10:29.982 )") 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73511 00:10:29.982 15:01:00 -- target/bdev_io_wait.sh@35 -- # sync 00:10:29.982 15:01:00 -- nvmf/common.sh@544 -- # jq . 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # cat 00:10:29.982 15:01:00 -- nvmf/common.sh@545 -- # IFS=, 00:10:29.982 15:01:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme1", 00:10:29.982 "trtype": "tcp", 00:10:29.982 "traddr": "10.0.0.2", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "4420", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.982 "hdgst": false, 00:10:29.982 "ddgst": false 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 }' 00:10:29.982 15:01:00 -- nvmf/common.sh@542 -- # cat 00:10:29.982 15:01:00 -- nvmf/common.sh@545 -- # IFS=, 00:10:29.982 15:01:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme1", 00:10:29.982 "trtype": "tcp", 00:10:29.982 "traddr": "10.0.0.2", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "4420", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.982 "hdgst": false, 00:10:29.982 "ddgst": false 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 }' 00:10:29.982 15:01:00 -- nvmf/common.sh@544 -- # jq . 00:10:29.982 15:01:00 -- nvmf/common.sh@544 -- # jq . 
00:10:29.982 15:01:00 -- nvmf/common.sh@545 -- # IFS=, 00:10:29.982 15:01:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme1", 00:10:29.982 "trtype": "tcp", 00:10:29.982 "traddr": "10.0.0.2", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "4420", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.982 "hdgst": false, 00:10:29.982 "ddgst": false 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 }' 00:10:29.982 15:01:00 -- nvmf/common.sh@545 -- # IFS=, 00:10:29.982 15:01:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:29.982 "params": { 00:10:29.982 "name": "Nvme1", 00:10:29.982 "trtype": "tcp", 00:10:29.982 "traddr": "10.0.0.2", 00:10:29.982 "adrfam": "ipv4", 00:10:29.982 "trsvcid": "4420", 00:10:29.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:29.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:29.982 "hdgst": false, 00:10:29.982 "ddgst": false 00:10:29.982 }, 00:10:29.982 "method": "bdev_nvme_attach_controller" 00:10:29.982 }' 00:10:29.982 [2024-11-20 15:01:00.743003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.982 [2024-11-20 15:01:00.743286] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:29.982 [2024-11-20 15:01:00.744463] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.982 [2024-11-20 15:01:00.744710] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:29.982 [2024-11-20 15:01:00.759439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.982 [2024-11-20 15:01:00.763699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:29.982 [2024-11-20 15:01:00.764492] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:29.983 [2024-11-20 15:01:00.764559] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:29.983 15:01:00 -- target/bdev_io_wait.sh@37 -- # wait 73505 00:10:30.241 [2024-11-20 15:01:00.919047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.241 [2024-11-20 15:01:00.944120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:10:30.241 [2024-11-20 15:01:00.970875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.241 [2024-11-20 15:01:00.995929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:10:30.241 [2024-11-20 15:01:01.007617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.241 [2024-11-20 15:01:01.028855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:10:30.500 Running I/O for 1 seconds... 
00:10:30.500 [2024-11-20 15:01:01.072006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.500 [2024-11-20 15:01:01.099800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:30.500 Running I/O for 1 seconds... 00:10:30.500 Running I/O for 1 seconds... 00:10:30.500 Running I/O for 1 seconds... 00:10:31.545 00:10:31.545 Latency(us) 00:10:31.545 [2024-11-20T15:01:02.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.545 [2024-11-20T15:01:02.349Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:31.545 Nvme1n1 : 1.00 162990.48 636.68 0.00 0.00 782.54 340.71 1109.64 00:10:31.545 [2024-11-20T15:01:02.349Z] =================================================================================================================== 00:10:31.545 [2024-11-20T15:01:02.349Z] Total : 162990.48 636.68 0.00 0.00 782.54 340.71 1109.64 00:10:31.545 00:10:31.545 Latency(us) 00:10:31.545 [2024-11-20T15:01:02.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.545 [2024-11-20T15:01:02.349Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:31.545 Nvme1n1 : 1.01 11551.47 45.12 0.00 0.00 11043.87 5600.35 19303.33 00:10:31.545 [2024-11-20T15:01:02.349Z] =================================================================================================================== 00:10:31.545 [2024-11-20T15:01:02.349Z] Total : 11551.47 45.12 0.00 0.00 11043.87 5600.35 19303.33 00:10:31.545 00:10:31.545 Latency(us) 00:10:31.545 [2024-11-20T15:01:02.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.545 [2024-11-20T15:01:02.349Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:31.545 Nvme1n1 : 1.01 7604.58 29.71 0.00 0.00 16740.98 10187.87 40751.48 00:10:31.545 [2024-11-20T15:01:02.349Z] =================================================================================================================== 00:10:31.545 [2024-11-20T15:01:02.349Z] Total : 7604.58 29.71 0.00 0.00 16740.98 10187.87 40751.48 00:10:31.545 00:10:31.545 Latency(us) 00:10:31.545 [2024-11-20T15:01:02.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.545 [2024-11-20T15:01:02.349Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:31.545 Nvme1n1 : 1.01 8372.28 32.70 0.00 0.00 15222.33 6196.13 27167.65 00:10:31.545 [2024-11-20T15:01:02.349Z] =================================================================================================================== 00:10:31.545 [2024-11-20T15:01:02.349Z] Total : 8372.28 32.70 0.00 0.00 15222.33 6196.13 27167.65 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@38 -- # wait 73507 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@39 -- # wait 73509 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@40 -- # wait 73511 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.804 15:01:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.804 15:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:31.804 15:01:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:31.804 15:01:02 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:31.804 15:01:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:31.804 15:01:02 -- nvmf/common.sh@116 -- # sync 00:10:31.804 15:01:02 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:31.804 15:01:02 -- nvmf/common.sh@119 -- # set +e 00:10:31.804 15:01:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:31.804 15:01:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:31.804 rmmod nvme_tcp 00:10:31.804 rmmod nvme_fabrics 00:10:31.804 rmmod nvme_keyring 00:10:31.804 15:01:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:31.804 15:01:02 -- nvmf/common.sh@123 -- # set -e 00:10:31.804 15:01:02 -- nvmf/common.sh@124 -- # return 0 00:10:31.804 15:01:02 -- nvmf/common.sh@477 -- # '[' -n 73472 ']' 00:10:31.804 15:01:02 -- nvmf/common.sh@478 -- # killprocess 73472 00:10:31.804 15:01:02 -- common/autotest_common.sh@936 -- # '[' -z 73472 ']' 00:10:31.804 15:01:02 -- common/autotest_common.sh@940 -- # kill -0 73472 00:10:31.804 15:01:02 -- common/autotest_common.sh@941 -- # uname 00:10:31.804 15:01:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:31.804 15:01:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73472 00:10:31.804 15:01:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:31.804 15:01:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:31.804 killing process with pid 73472 00:10:31.804 15:01:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73472' 00:10:31.804 15:01:02 -- common/autotest_common.sh@955 -- # kill 73472 00:10:31.804 15:01:02 -- common/autotest_common.sh@960 -- # wait 73472 00:10:32.062 15:01:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:32.062 15:01:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:32.062 15:01:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:32.062 15:01:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.062 15:01:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:32.062 15:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.062 15:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.062 15:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.062 15:01:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:32.062 00:10:32.062 real 0m3.105s 00:10:32.062 user 0m13.096s 00:10:32.062 sys 0m1.979s 00:10:32.062 15:01:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.063 15:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.063 ************************************ 00:10:32.063 END TEST nvmf_bdev_io_wait 00:10:32.063 ************************************ 00:10:32.063 15:01:02 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:32.063 15:01:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:32.063 15:01:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.063 15:01:02 -- common/autotest_common.sh@10 -- # set +x 00:10:32.063 ************************************ 00:10:32.063 START TEST nvmf_queue_depth 00:10:32.063 ************************************ 00:10:32.063 15:01:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:32.063 * Looking for test storage... 
00:10:32.063 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.063 15:01:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:32.063 15:01:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:32.063 15:01:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:32.322 15:01:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:32.322 15:01:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:32.322 15:01:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:32.322 15:01:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:32.322 15:01:02 -- scripts/common.sh@335 -- # IFS=.-: 00:10:32.322 15:01:02 -- scripts/common.sh@335 -- # read -ra ver1 00:10:32.322 15:01:02 -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.322 15:01:02 -- scripts/common.sh@336 -- # read -ra ver2 00:10:32.322 15:01:02 -- scripts/common.sh@337 -- # local 'op=<' 00:10:32.322 15:01:02 -- scripts/common.sh@339 -- # ver1_l=2 00:10:32.322 15:01:02 -- scripts/common.sh@340 -- # ver2_l=1 00:10:32.322 15:01:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:32.322 15:01:02 -- scripts/common.sh@343 -- # case "$op" in 00:10:32.322 15:01:02 -- scripts/common.sh@344 -- # : 1 00:10:32.322 15:01:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:32.322 15:01:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.322 15:01:02 -- scripts/common.sh@364 -- # decimal 1 00:10:32.322 15:01:02 -- scripts/common.sh@352 -- # local d=1 00:10:32.322 15:01:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.322 15:01:02 -- scripts/common.sh@354 -- # echo 1 00:10:32.322 15:01:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:32.322 15:01:02 -- scripts/common.sh@365 -- # decimal 2 00:10:32.322 15:01:02 -- scripts/common.sh@352 -- # local d=2 00:10:32.322 15:01:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.322 15:01:02 -- scripts/common.sh@354 -- # echo 2 00:10:32.322 15:01:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:32.322 15:01:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:32.322 15:01:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:32.322 15:01:02 -- scripts/common.sh@367 -- # return 0 00:10:32.322 15:01:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.322 15:01:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:32.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.322 --rc genhtml_branch_coverage=1 00:10:32.322 --rc genhtml_function_coverage=1 00:10:32.322 --rc genhtml_legend=1 00:10:32.322 --rc geninfo_all_blocks=1 00:10:32.322 --rc geninfo_unexecuted_blocks=1 00:10:32.322 00:10:32.322 ' 00:10:32.322 15:01:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:32.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.322 --rc genhtml_branch_coverage=1 00:10:32.322 --rc genhtml_function_coverage=1 00:10:32.322 --rc genhtml_legend=1 00:10:32.322 --rc geninfo_all_blocks=1 00:10:32.322 --rc geninfo_unexecuted_blocks=1 00:10:32.322 00:10:32.322 ' 00:10:32.322 15:01:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:32.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.322 --rc genhtml_branch_coverage=1 00:10:32.322 --rc genhtml_function_coverage=1 00:10:32.322 --rc genhtml_legend=1 00:10:32.322 --rc geninfo_all_blocks=1 00:10:32.322 --rc geninfo_unexecuted_blocks=1 00:10:32.322 00:10:32.322 ' 00:10:32.322 
15:01:02 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:32.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.322 --rc genhtml_branch_coverage=1 00:10:32.322 --rc genhtml_function_coverage=1 00:10:32.322 --rc genhtml_legend=1 00:10:32.322 --rc geninfo_all_blocks=1 00:10:32.322 --rc geninfo_unexecuted_blocks=1 00:10:32.322 00:10:32.322 ' 00:10:32.322 15:01:02 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.322 15:01:02 -- nvmf/common.sh@7 -- # uname -s 00:10:32.322 15:01:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.322 15:01:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.322 15:01:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.322 15:01:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.322 15:01:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.322 15:01:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.322 15:01:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.322 15:01:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.322 15:01:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.322 15:01:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:10:32.322 15:01:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:10:32.322 15:01:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.322 15:01:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.322 15:01:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:32.322 15:01:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:32.322 15:01:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.322 15:01:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.322 15:01:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.322 15:01:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.322 15:01:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.322 15:01:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.322 15:01:02 -- paths/export.sh@5 -- # export PATH 00:10:32.322 15:01:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.322 15:01:02 -- nvmf/common.sh@46 -- # : 0 00:10:32.322 15:01:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:32.322 15:01:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:32.322 15:01:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:32.322 15:01:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.322 15:01:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.322 15:01:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:32.322 15:01:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:32.322 15:01:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:32.322 15:01:02 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:32.322 15:01:02 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:32.322 15:01:02 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:32.322 15:01:02 -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:32.322 15:01:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:32.322 15:01:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.322 15:01:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:32.322 15:01:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:32.322 15:01:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:32.322 15:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.322 15:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:32.322 15:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.322 15:01:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:32.322 15:01:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:32.322 15:01:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.322 15:01:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.322 15:01:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:32.322 15:01:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:32.322 15:01:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:32.322 15:01:02 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:32.322 15:01:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:32.322 15:01:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.323 15:01:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:32.323 15:01:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:32.323 15:01:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:32.323 15:01:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:32.323 15:01:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:32.323 15:01:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:32.323 Cannot find device "nvmf_tgt_br" 00:10:32.323 15:01:03 -- nvmf/common.sh@154 -- # true 00:10:32.323 15:01:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:32.323 Cannot find device "nvmf_tgt_br2" 00:10:32.323 15:01:03 -- nvmf/common.sh@155 -- # true 00:10:32.323 15:01:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:32.323 15:01:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:32.323 Cannot find device "nvmf_tgt_br" 00:10:32.323 15:01:03 -- nvmf/common.sh@157 -- # true 00:10:32.323 15:01:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:32.323 Cannot find device "nvmf_tgt_br2" 00:10:32.323 15:01:03 -- nvmf/common.sh@158 -- # true 00:10:32.323 15:01:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:32.323 15:01:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:32.323 15:01:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:32.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.582 15:01:03 -- nvmf/common.sh@161 -- # true 00:10:32.582 15:01:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:32.582 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:32.582 15:01:03 -- nvmf/common.sh@162 -- # true 00:10:32.582 15:01:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:32.582 15:01:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:32.582 15:01:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:32.582 15:01:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:32.582 15:01:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:32.582 15:01:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:32.582 15:01:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:32.582 15:01:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:32.582 15:01:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:32.582 15:01:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:32.582 15:01:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:32.582 15:01:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:32.582 15:01:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:32.582 15:01:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:32.582 15:01:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:10:32.582 15:01:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:32.582 15:01:03 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:32.582 15:01:03 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:32.582 15:01:03 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:32.582 15:01:03 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:32.582 15:01:03 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:32.582 15:01:03 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:32.582 15:01:03 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.582 15:01:03 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:32.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:32.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:10:32.582 00:10:32.582 --- 10.0.0.2 ping statistics --- 00:10:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.582 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:32.582 15:01:03 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:32.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:32.582 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:10:32.582 00:10:32.582 --- 10.0.0.3 ping statistics --- 00:10:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.582 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:10:32.582 15:01:03 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:32.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:32.582 00:10:32.582 --- 10.0.0.1 ping statistics --- 00:10:32.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.582 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:32.582 15:01:03 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.582 15:01:03 -- nvmf/common.sh@421 -- # return 0 00:10:32.582 15:01:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:32.582 15:01:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.582 15:01:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:32.582 15:01:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:32.582 15:01:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.582 15:01:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:32.582 15:01:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:32.582 15:01:03 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:32.582 15:01:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:32.582 15:01:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.582 15:01:03 -- common/autotest_common.sh@10 -- # set +x 00:10:32.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:32.582 15:01:03 -- nvmf/common.sh@469 -- # nvmfpid=73717 00:10:32.582 15:01:03 -- nvmf/common.sh@470 -- # waitforlisten 73717 00:10:32.583 15:01:03 -- common/autotest_common.sh@829 -- # '[' -z 73717 ']' 00:10:32.583 15:01:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:32.583 15:01:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.583 15:01:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.583 15:01:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.583 15:01:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.583 15:01:03 -- common/autotest_common.sh@10 -- # set +x 00:10:32.842 [2024-11-20 15:01:03.410315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:32.842 [2024-11-20 15:01:03.410571] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.842 [2024-11-20 15:01:03.545186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.842 [2024-11-20 15:01:03.579935] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:32.842 [2024-11-20 15:01:03.580236] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.842 [2024-11-20 15:01:03.580547] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.842 [2024-11-20 15:01:03.580932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.842 [2024-11-20 15:01:03.581143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.779 15:01:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:33.779 15:01:04 -- common/autotest_common.sh@862 -- # return 0 00:10:33.779 15:01:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:33.779 15:01:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 15:01:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.779 15:01:04 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.779 15:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 [2024-11-20 15:01:04.448962] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.779 15:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.779 15:01:04 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:33.779 15:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 Malloc0 00:10:33.779 15:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.779 15:01:04 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:33.779 15:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 15:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.779 15:01:04 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:33.779 15:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 15:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.779 15:01:04 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.779 15:01:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 [2024-11-20 15:01:04.519312] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
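Editor's note: the rpc_cmd calls above assemble the target side of queue_depth.sh: a TCP transport, a small RAM-backed bdev, and one subsystem exporting it on 10.0.0.2:4420. Spelled out with scripts/rpc.py (the helper the later multipath test invokes directly, which talks to /var/tmp/spdk.sock by default), the same sequence looks roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420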
00:10:33.779 15:01:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.779 15:01:04 -- target/queue_depth.sh@30 -- # bdevperf_pid=73755 00:10:33.779 15:01:04 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:33.779 15:01:04 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.779 15:01:04 -- target/queue_depth.sh@33 -- # waitforlisten 73755 /var/tmp/bdevperf.sock 00:10:33.779 15:01:04 -- common/autotest_common.sh@829 -- # '[' -z 73755 ']' 00:10:33.779 15:01:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:33.779 15:01:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.779 15:01:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:33.779 15:01:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.779 15:01:04 -- common/autotest_common.sh@10 -- # set +x 00:10:33.779 [2024-11-20 15:01:04.573757] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:33.779 [2024-11-20 15:01:04.574054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73755 ] 00:10:34.038 [2024-11-20 15:01:04.713112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.038 [2024-11-20 15:01:04.749010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.973 15:01:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.973 15:01:05 -- common/autotest_common.sh@862 -- # return 0 00:10:34.973 15:01:05 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:34.973 15:01:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.973 15:01:05 -- common/autotest_common.sh@10 -- # set +x 00:10:34.973 NVMe0n1 00:10:34.973 15:01:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.973 15:01:05 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:34.973 Running I/O for 10 seconds... 
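Editor's note: the actual load for this test comes from bdevperf at a queue depth of 1024 with 4 KiB verify I/O. As the trace shows, bdevperf is started in wait mode (-z) on its own RPC socket, an NVMe-oF controller is attached over TCP (surfacing the remote namespace as bdev NVMe0n1), and the run is then kicked off through bdevperf.py. A condensed sketch of those three steps:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # attach the exported namespace over TCP, then start the 10-second run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests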
00:10:47.181 00:10:47.182 Latency(us) 00:10:47.182 [2024-11-20T15:01:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.182 [2024-11-20T15:01:17.986Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:47.182 Verification LBA range: start 0x0 length 0x4000 00:10:47.182 NVMe0n1 : 10.07 13434.70 52.48 0.00 0.00 75906.81 15371.17 57195.05 00:10:47.182 [2024-11-20T15:01:17.986Z] =================================================================================================================== 00:10:47.182 [2024-11-20T15:01:17.986Z] Total : 13434.70 52.48 0.00 0.00 75906.81 15371.17 57195.05 00:10:47.182 0 00:10:47.182 15:01:15 -- target/queue_depth.sh@39 -- # killprocess 73755 00:10:47.182 15:01:15 -- common/autotest_common.sh@936 -- # '[' -z 73755 ']' 00:10:47.182 15:01:15 -- common/autotest_common.sh@940 -- # kill -0 73755 00:10:47.182 15:01:15 -- common/autotest_common.sh@941 -- # uname 00:10:47.182 15:01:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.182 15:01:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73755 00:10:47.182 killing process with pid 73755 00:10:47.182 Received shutdown signal, test time was about 10.000000 seconds 00:10:47.182 00:10:47.182 Latency(us) 00:10:47.182 [2024-11-20T15:01:17.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:47.182 [2024-11-20T15:01:17.986Z] =================================================================================================================== 00:10:47.182 [2024-11-20T15:01:17.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:47.182 15:01:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:47.182 15:01:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:47.182 15:01:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73755' 00:10:47.182 15:01:15 -- common/autotest_common.sh@955 -- # kill 73755 00:10:47.182 15:01:15 -- common/autotest_common.sh@960 -- # wait 73755 00:10:47.182 15:01:16 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:47.182 15:01:16 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:47.182 15:01:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:47.182 15:01:16 -- nvmf/common.sh@116 -- # sync 00:10:47.182 15:01:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:47.182 15:01:16 -- nvmf/common.sh@119 -- # set +e 00:10:47.182 15:01:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:47.182 15:01:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:47.182 rmmod nvme_tcp 00:10:47.182 rmmod nvme_fabrics 00:10:47.182 rmmod nvme_keyring 00:10:47.182 15:01:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:47.182 15:01:16 -- nvmf/common.sh@123 -- # set -e 00:10:47.182 15:01:16 -- nvmf/common.sh@124 -- # return 0 00:10:47.182 15:01:16 -- nvmf/common.sh@477 -- # '[' -n 73717 ']' 00:10:47.182 15:01:16 -- nvmf/common.sh@478 -- # killprocess 73717 00:10:47.182 15:01:16 -- common/autotest_common.sh@936 -- # '[' -z 73717 ']' 00:10:47.182 15:01:16 -- common/autotest_common.sh@940 -- # kill -0 73717 00:10:47.182 15:01:16 -- common/autotest_common.sh@941 -- # uname 00:10:47.182 15:01:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:47.182 15:01:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73717 00:10:47.182 killing process with pid 73717 00:10:47.182 15:01:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:47.182 15:01:16 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:47.182 15:01:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73717' 00:10:47.182 15:01:16 -- common/autotest_common.sh@955 -- # kill 73717 00:10:47.182 15:01:16 -- common/autotest_common.sh@960 -- # wait 73717 00:10:47.182 15:01:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:47.182 15:01:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:47.182 15:01:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:47.182 15:01:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:47.182 15:01:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:47.182 15:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.182 15:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.182 15:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.182 15:01:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:47.182 ************************************ 00:10:47.182 END TEST nvmf_queue_depth 00:10:47.182 ************************************ 00:10:47.182 00:10:47.182 real 0m13.597s 00:10:47.182 user 0m23.808s 00:10:47.182 sys 0m1.870s 00:10:47.182 15:01:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.182 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 15:01:16 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:47.182 15:01:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.182 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:47.182 ************************************ 00:10:47.182 START TEST nvmf_multipath 00:10:47.182 ************************************ 00:10:47.182 15:01:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:47.182 * Looking for test storage... 00:10:47.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:47.182 15:01:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:47.182 15:01:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:47.182 15:01:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:47.182 15:01:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:47.182 15:01:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:47.182 15:01:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:47.182 15:01:16 -- scripts/common.sh@335 -- # IFS=.-: 00:10:47.182 15:01:16 -- scripts/common.sh@335 -- # read -ra ver1 00:10:47.182 15:01:16 -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.182 15:01:16 -- scripts/common.sh@336 -- # read -ra ver2 00:10:47.182 15:01:16 -- scripts/common.sh@337 -- # local 'op=<' 00:10:47.182 15:01:16 -- scripts/common.sh@339 -- # ver1_l=2 00:10:47.182 15:01:16 -- scripts/common.sh@340 -- # ver2_l=1 00:10:47.182 15:01:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:47.182 15:01:16 -- scripts/common.sh@343 -- # case "$op" in 00:10:47.182 15:01:16 -- scripts/common.sh@344 -- # : 1 00:10:47.182 15:01:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:47.182 15:01:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.182 15:01:16 -- scripts/common.sh@364 -- # decimal 1 00:10:47.182 15:01:16 -- scripts/common.sh@352 -- # local d=1 00:10:47.182 15:01:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.182 15:01:16 -- scripts/common.sh@354 -- # echo 1 00:10:47.182 15:01:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:47.182 15:01:16 -- scripts/common.sh@365 -- # decimal 2 00:10:47.182 15:01:16 -- scripts/common.sh@352 -- # local d=2 00:10:47.182 15:01:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.182 15:01:16 -- scripts/common.sh@354 -- # echo 2 00:10:47.182 15:01:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:47.182 15:01:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:47.182 15:01:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:47.182 15:01:16 -- scripts/common.sh@367 -- # return 0 00:10:47.182 15:01:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 15:01:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:47.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.182 --rc genhtml_branch_coverage=1 00:10:47.182 --rc genhtml_function_coverage=1 00:10:47.182 --rc genhtml_legend=1 00:10:47.182 --rc geninfo_all_blocks=1 00:10:47.182 --rc geninfo_unexecuted_blocks=1 00:10:47.182 00:10:47.182 ' 00:10:47.182 15:01:16 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:47.182 15:01:16 -- nvmf/common.sh@7 -- # uname -s 00:10:47.182 15:01:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.182 15:01:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.182 15:01:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.182 15:01:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.182 15:01:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.182 15:01:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.182 15:01:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.182 15:01:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.182 15:01:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.182 15:01:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.183 15:01:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:10:47.183 
15:01:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:10:47.183 15:01:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.183 15:01:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.183 15:01:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:47.183 15:01:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:47.183 15:01:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.183 15:01:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.183 15:01:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.183 15:01:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.183 15:01:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.183 15:01:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.183 15:01:16 -- paths/export.sh@5 -- # export PATH 00:10:47.183 15:01:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.183 15:01:16 -- nvmf/common.sh@46 -- # : 0 00:10:47.183 15:01:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:47.183 15:01:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:47.183 15:01:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:47.183 15:01:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.183 15:01:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.183 15:01:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
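Editor's note: common.sh derives a per-run host identity here: nvme gen-hostnqn produces NVME_HOSTNQN, NVME_HOSTID carries its trailing UUID, and both are passed on every nvme connect later in this test. A minimal sketch of that pattern; the UUID extraction below is an illustrative one-liner, not necessarily how common.sh does it:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # the trailing UUID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")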
00:10:47.183 15:01:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:47.183 15:01:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:47.183 15:01:16 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.183 15:01:16 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.183 15:01:16 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:47.183 15:01:16 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:47.183 15:01:16 -- target/multipath.sh@43 -- # nvmftestinit 00:10:47.183 15:01:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:47.183 15:01:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.183 15:01:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:47.183 15:01:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:47.183 15:01:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:47.183 15:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.183 15:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.183 15:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.183 15:01:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:47.183 15:01:16 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:47.183 15:01:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:47.183 15:01:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:47.183 15:01:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:47.183 15:01:16 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:47.183 15:01:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.183 15:01:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.183 15:01:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:47.183 15:01:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:47.183 15:01:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:47.183 15:01:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:47.183 15:01:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:47.183 15:01:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.183 15:01:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:47.183 15:01:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:47.183 15:01:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:47.183 15:01:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:47.183 15:01:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:47.183 15:01:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:47.183 Cannot find device "nvmf_tgt_br" 00:10:47.183 15:01:16 -- nvmf/common.sh@154 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:47.183 Cannot find device "nvmf_tgt_br2" 00:10:47.183 15:01:16 -- nvmf/common.sh@155 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:47.183 15:01:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:47.183 Cannot find device "nvmf_tgt_br" 00:10:47.183 15:01:16 -- nvmf/common.sh@157 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:47.183 Cannot find device "nvmf_tgt_br2" 00:10:47.183 15:01:16 -- nvmf/common.sh@158 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:47.183 15:01:16 -- 
nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:47.183 15:01:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:47.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.183 15:01:16 -- nvmf/common.sh@161 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:47.183 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:47.183 15:01:16 -- nvmf/common.sh@162 -- # true 00:10:47.183 15:01:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:47.183 15:01:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:47.183 15:01:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:47.183 15:01:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:47.183 15:01:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:47.183 15:01:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:47.183 15:01:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:47.183 15:01:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:47.183 15:01:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:47.183 15:01:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:47.183 15:01:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:47.183 15:01:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:47.183 15:01:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:47.183 15:01:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:47.183 15:01:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:47.183 15:01:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:47.183 15:01:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:47.183 15:01:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:47.183 15:01:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:47.183 15:01:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:47.183 15:01:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:47.183 15:01:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:47.183 15:01:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.183 15:01:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:47.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:47.183 00:10:47.183 --- 10.0.0.2 ping statistics --- 00:10:47.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.183 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:47.183 15:01:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:47.183 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:47.183 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:47.183 00:10:47.183 --- 10.0.0.3 ping statistics --- 00:10:47.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.183 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:47.183 15:01:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:47.183 00:10:47.183 --- 10.0.0.1 ping statistics --- 00:10:47.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.183 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:47.183 15:01:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.184 15:01:16 -- nvmf/common.sh@421 -- # return 0 00:10:47.184 15:01:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:47.184 15:01:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.184 15:01:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:47.184 15:01:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:47.184 15:01:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.184 15:01:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:47.184 15:01:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:47.184 15:01:16 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:47.184 15:01:16 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:47.184 15:01:16 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:47.184 15:01:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:47.184 15:01:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.184 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 15:01:16 -- nvmf/common.sh@469 -- # nvmfpid=74083 00:10:47.184 15:01:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.184 15:01:16 -- nvmf/common.sh@470 -- # waitforlisten 74083 00:10:47.184 15:01:16 -- common/autotest_common.sh@829 -- # '[' -z 74083 ']' 00:10:47.184 15:01:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.184 15:01:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:47.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.184 15:01:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.184 15:01:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:47.184 15:01:16 -- common/autotest_common.sh@10 -- # set +x 00:10:47.184 [2024-11-20 15:01:17.052971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:10:47.184 [2024-11-20 15:01:17.053105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.184 [2024-11-20 15:01:17.196385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.184 [2024-11-20 15:01:17.242903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:47.184 [2024-11-20 15:01:17.243334] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:47.184 [2024-11-20 15:01:17.243401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.184 [2024-11-20 15:01:17.243711] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.184 [2024-11-20 15:01:17.243982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.184 [2024-11-20 15:01:17.244169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.184 [2024-11-20 15:01:17.244173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.184 [2024-11-20 15:01:17.244073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.443 15:01:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.443 15:01:18 -- common/autotest_common.sh@862 -- # return 0 00:10:47.443 15:01:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:47.443 15:01:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:47.443 15:01:18 -- common/autotest_common.sh@10 -- # set +x 00:10:47.443 15:01:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.443 15:01:18 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:47.702 [2024-11-20 15:01:18.421098] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.702 15:01:18 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:48.268 Malloc0 00:10:48.268 15:01:18 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:48.526 15:01:19 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.784 15:01:19 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.043 [2024-11-20 15:01:19.748094] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.043 15:01:19 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:49.300 [2024-11-20 15:01:20.028403] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:49.300 15:01:20 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:49.598 15:01:20 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:49.598 15:01:20 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.598 15:01:20 -- common/autotest_common.sh@1187 -- # local i=0 00:10:49.598 15:01:20 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.598 15:01:20 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:10:49.598 15:01:20 -- common/autotest_common.sh@1194 -- # sleep 2 00:10:52.131 15:01:22 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 
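Editor's note: multipath.sh publishes the same subsystem on both target addresses and connects to each listener with the generated host NQN, which yields a single /dev/nvme0 namespace with two controller paths (nvme0c0n1 and nvme0c1n1). Failover is then exercised by flipping each listener's ANA state over RPC and reading the state back from sysfs, as the remainder of this trace shows. A condensed sketch using the same commands (rpc.py here is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # one controller per path, both for the same subsystem
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

    # fail one path, keep the other reachable, then check what the host sees
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    cat /sys/block/nvme0c0n1/ana_state    # expected: inaccessible
    cat /sys/block/nvme0c1n1/ana_state    # expected: non-optimized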
00:10:52.131 15:01:22 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:10:52.131 15:01:22 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.131 15:01:22 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:10:52.131 15:01:22 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.131 15:01:22 -- common/autotest_common.sh@1197 -- # return 0 00:10:52.131 15:01:22 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:52.131 15:01:22 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:52.131 15:01:22 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:52.131 15:01:22 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:52.132 15:01:22 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:52.132 15:01:22 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:52.132 15:01:22 -- target/multipath.sh@38 -- # return 0 00:10:52.132 15:01:22 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:52.132 15:01:22 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:52.132 15:01:22 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:52.132 15:01:22 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:52.132 15:01:22 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:52.132 15:01:22 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:52.132 15:01:22 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:52.132 15:01:22 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:52.132 15:01:22 -- target/multipath.sh@22 -- # local timeout=20 00:10:52.132 15:01:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:52.132 15:01:22 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:52.132 15:01:22 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:52.132 15:01:22 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:52.132 15:01:22 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:52.132 15:01:22 -- target/multipath.sh@22 -- # local timeout=20 00:10:52.132 15:01:22 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:52.132 15:01:22 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:52.132 15:01:22 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:52.132 15:01:22 -- target/multipath.sh@85 -- # echo numa 00:10:52.132 15:01:22 -- target/multipath.sh@88 -- # fio_pid=74181 00:10:52.132 15:01:22 -- target/multipath.sh@90 -- # sleep 1 00:10:52.132 15:01:22 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:52.132 [global] 00:10:52.132 thread=1 00:10:52.132 invalidate=1 00:10:52.132 rw=randrw 00:10:52.132 time_based=1 00:10:52.132 runtime=6 00:10:52.132 ioengine=libaio 00:10:52.132 direct=1 00:10:52.132 bs=4096 00:10:52.132 iodepth=128 00:10:52.132 norandommap=0 00:10:52.132 numjobs=1 00:10:52.132 00:10:52.132 verify_dump=1 00:10:52.132 verify_backlog=512 00:10:52.132 verify_state_save=0 00:10:52.132 do_verify=1 00:10:52.132 verify=crc32c-intel 00:10:52.132 [job0] 00:10:52.132 filename=/dev/nvme0n1 00:10:52.132 Could not set queue depth (nvme0n1) 00:10:52.132 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:52.132 fio-3.35 00:10:52.132 Starting 1 thread 00:10:52.699 15:01:23 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:52.958 15:01:23 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:53.216 15:01:23 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:53.216 15:01:23 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:53.216 15:01:23 -- target/multipath.sh@22 -- # local timeout=20 00:10:53.216 15:01:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:53.216 15:01:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:53.216 15:01:23 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:53.216 15:01:23 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:53.216 15:01:23 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:53.216 15:01:23 -- target/multipath.sh@22 -- # local timeout=20 00:10:53.216 15:01:23 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:53.216 15:01:23 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.216 15:01:23 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:53.216 15:01:23 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:53.474 15:01:24 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:53.733 15:01:24 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:53.733 15:01:24 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:53.733 15:01:24 -- target/multipath.sh@22 -- # local timeout=20 00:10:53.733 15:01:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:53.733 15:01:24 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:53.733 15:01:24 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:53.733 15:01:24 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:53.733 15:01:24 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:53.733 15:01:24 -- target/multipath.sh@22 -- # local timeout=20 00:10:53.733 15:01:24 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:53.733 15:01:24 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:53.733 15:01:24 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:53.733 15:01:24 -- target/multipath.sh@104 -- # wait 74181 00:10:57.966 00:10:57.966 job0: (groupid=0, jobs=1): err= 0: pid=74202: Wed Nov 20 15:01:28 2024 00:10:57.966 read: IOPS=10.9k, BW=42.5MiB/s (44.6MB/s)(255MiB/6006msec) 00:10:57.966 slat (usec): min=7, max=7698, avg=54.01, stdev=225.19 00:10:57.966 clat (usec): min=1198, max=15409, avg=8041.14, stdev=1455.42 00:10:57.966 lat (usec): min=1209, max=15442, avg=8095.16, stdev=1459.90 00:10:57.966 clat percentiles (usec): 00:10:57.966 | 1.00th=[ 4228], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 7242], 00:10:57.966 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8094], 00:10:57.966 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[11469], 00:10:57.966 | 99.00th=[12649], 99.50th=[12911], 99.90th=[13173], 99.95th=[13566], 00:10:57.966 | 99.99th=[13829] 00:10:57.966 bw ( KiB/s): min=11384, max=27960, per=51.27%, avg=22319.27, stdev=5716.08, samples=11 00:10:57.966 iops : min= 2846, max= 6990, avg=5579.82, stdev=1429.02, samples=11 00:10:57.966 write: IOPS=6222, BW=24.3MiB/s (25.5MB/s)(131MiB/5404msec); 0 zone resets 00:10:57.966 slat (usec): min=13, max=2227, avg=63.05, stdev=149.05 00:10:57.966 clat (usec): min=865, max=13702, avg=7025.94, stdev=1264.19 00:10:57.966 lat (usec): min=919, max=13726, avg=7088.99, stdev=1268.69 00:10:57.966 clat percentiles (usec): 00:10:57.966 | 1.00th=[ 3261], 5.00th=[ 4293], 10.00th=[ 5342], 20.00th=[ 6521], 00:10:57.966 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7373], 00:10:57.966 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8356], 00:10:57.966 | 99.00th=[11076], 99.50th=[11731], 99.90th=[12780], 99.95th=[13042], 00:10:57.966 | 99.99th=[13435] 00:10:57.966 bw ( KiB/s): min=11928, max=27056, per=89.69%, avg=22325.09, stdev=5320.04, samples=11 00:10:57.966 iops : min= 2982, max= 6764, avg=5581.27, stdev=1330.01, samples=11 00:10:57.966 lat (usec) : 1000=0.01% 00:10:57.966 lat (msec) : 2=0.02%, 4=1.60%, 10=92.22%, 20=6.16% 00:10:57.966 cpu : usr=6.03%, sys=23.30%, ctx=5762, majf=0, minf=90 00:10:57.966 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:57.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.966 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.966 issued rwts: total=65362,33627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.966 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.966 00:10:57.966 Run status group 0 (all jobs): 00:10:57.966 READ: bw=42.5MiB/s (44.6MB/s), 42.5MiB/s-42.5MiB/s (44.6MB/s-44.6MB/s), io=255MiB (268MB), run=6006-6006msec 00:10:57.966 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=131MiB (138MB), run=5404-5404msec 00:10:57.966 00:10:57.966 Disk stats (read/write): 00:10:57.966 
nvme0n1: ios=64414/32976, merge=0/0, ticks=493547/216099, in_queue=709646, util=98.65% 00:10:57.966 15:01:28 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:58.226 15:01:28 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:58.484 15:01:29 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:58.484 15:01:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:58.484 15:01:29 -- target/multipath.sh@22 -- # local timeout=20 00:10:58.484 15:01:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:58.484 15:01:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:58.484 15:01:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:58.484 15:01:29 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:58.484 15:01:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:58.484 15:01:29 -- target/multipath.sh@22 -- # local timeout=20 00:10:58.484 15:01:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:58.484 15:01:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:58.484 15:01:29 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:58.484 15:01:29 -- target/multipath.sh@113 -- # echo round-robin 00:10:58.484 15:01:29 -- target/multipath.sh@116 -- # fio_pid=74285 00:10:58.484 15:01:29 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:58.484 15:01:29 -- target/multipath.sh@118 -- # sleep 1 00:10:58.484 [global] 00:10:58.484 thread=1 00:10:58.484 invalidate=1 00:10:58.484 rw=randrw 00:10:58.484 time_based=1 00:10:58.484 runtime=6 00:10:58.484 ioengine=libaio 00:10:58.484 direct=1 00:10:58.484 bs=4096 00:10:58.484 iodepth=128 00:10:58.484 norandommap=0 00:10:58.484 numjobs=1 00:10:58.484 00:10:58.484 verify_dump=1 00:10:58.484 verify_backlog=512 00:10:58.484 verify_state_save=0 00:10:58.484 do_verify=1 00:10:58.484 verify=crc32c-intel 00:10:58.484 [job0] 00:10:58.484 filename=/dev/nvme0n1 00:10:58.743 Could not set queue depth (nvme0n1) 00:10:58.743 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:58.743 fio-3.35 00:10:58.743 Starting 1 thread 00:10:59.678 15:01:30 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:59.936 15:01:30 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:00.195 15:01:30 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:00.195 15:01:30 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:00.195 15:01:30 -- target/multipath.sh@22 -- # local timeout=20 00:11:00.195 15:01:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:00.195 15:01:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:00.195 15:01:30 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:00.195 15:01:30 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:00.195 15:01:30 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:00.195 15:01:30 -- target/multipath.sh@22 -- # local timeout=20 00:11:00.195 15:01:30 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:00.195 15:01:30 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:00.195 15:01:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:00.195 15:01:30 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:11:00.453 15:01:31 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:00.710 15:01:31 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:00.710 15:01:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:00.710 15:01:31 -- target/multipath.sh@22 -- # local timeout=20 00:11:00.710 15:01:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:00.710 15:01:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:00.710 15:01:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:00.710 15:01:31 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:00.710 15:01:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:00.710 15:01:31 -- target/multipath.sh@22 -- # local timeout=20 00:11:00.710 15:01:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:00.710 15:01:31 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:00.710 15:01:31 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:00.710 15:01:31 -- target/multipath.sh@132 -- # wait 74285 00:11:04.893 00:11:04.893 job0: (groupid=0, jobs=1): err= 0: pid=74306: Wed Nov 20 15:01:35 2024 00:11:04.893 read: IOPS=11.8k, BW=45.9MiB/s (48.2MB/s)(276MiB/6006msec) 00:11:04.893 slat (usec): min=6, max=5944, avg=42.89, stdev=196.79 00:11:04.893 clat (usec): min=292, max=19944, avg=7419.98, stdev=2197.73 00:11:04.893 lat (usec): min=302, max=19953, avg=7462.87, stdev=2210.77 00:11:04.893 clat percentiles (usec): 00:11:04.893 | 1.00th=[ 1680], 5.00th=[ 3785], 10.00th=[ 4686], 20.00th=[ 5735], 00:11:04.893 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 7832], 00:11:04.893 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9765], 95.00th=[11338], 00:11:04.893 | 99.00th=[14091], 99.50th=[15139], 99.90th=[17695], 99.95th=[18482], 00:11:04.893 | 99.99th=[19268] 00:11:04.894 bw ( KiB/s): min= 9064, max=39952, per=53.01%, avg=24938.91, stdev=7797.35, samples=11 00:11:04.894 iops : min= 2266, max= 9988, avg=6234.73, stdev=1949.34, samples=11 00:11:04.894 write: IOPS=6795, BW=26.5MiB/s (27.8MB/s)(146MiB/5488msec); 0 zone resets 00:11:04.894 slat (usec): min=13, max=2519, avg=53.81, stdev=131.02 00:11:04.894 clat (usec): min=271, max=18069, avg=6391.61, stdev=1978.92 00:11:04.894 lat (usec): min=302, max=18096, avg=6445.42, stdev=1992.23 00:11:04.894 clat percentiles (usec): 00:11:04.894 | 1.00th=[ 1532], 5.00th=[ 2999], 10.00th=[ 3589], 20.00th=[ 4424], 00:11:04.894 | 30.00th=[ 5473], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7177], 00:11:04.894 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8291], 95.00th=[ 9110], 00:11:04.894 | 99.00th=[11469], 99.50th=[12518], 99.90th=[14353], 99.95th=[14746], 00:11:04.894 | 99.99th=[17433] 00:11:04.894 bw ( KiB/s): min= 9584, max=39040, per=91.68%, avg=24918.55, stdev=7559.84, samples=11 00:11:04.894 iops : min= 2396, max= 9760, avg=6229.64, stdev=1889.96, samples=11 00:11:04.894 lat (usec) : 500=0.06%, 750=0.12%, 1000=0.27% 00:11:04.894 lat (msec) : 2=0.91%, 4=7.71%, 10=84.02%, 20=6.91% 00:11:04.894 cpu : usr=6.26%, sys=25.10%, ctx=6298, majf=0, minf=90 00:11:04.894 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:11:04.894 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.894 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:04.894 issued rwts: total=70631,37291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.894 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:04.894 00:11:04.894 Run status group 0 (all jobs): 00:11:04.894 READ: bw=45.9MiB/s (48.2MB/s), 45.9MiB/s-45.9MiB/s (48.2MB/s-48.2MB/s), io=276MiB (289MB), run=6006-6006msec 00:11:04.894 WRITE: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=146MiB (153MB), run=5488-5488msec 00:11:04.894 00:11:04.894 Disk stats (read/write): 00:11:04.894 nvme0n1: ios=69679/36630, merge=0/0, ticks=487867/215782, in_queue=703649, util=98.65% 00:11:04.894 15:01:35 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:04.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:04.894 15:01:35 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:04.894 15:01:35 -- common/autotest_common.sh@1208 -- # local i=0 00:11:04.894 15:01:35 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:04.894 15:01:35 -- 
common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.894 15:01:35 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:04.894 15:01:35 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:04.894 15:01:35 -- common/autotest_common.sh@1220 -- # return 0 00:11:04.894 15:01:35 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.152 15:01:35 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:11:05.152 15:01:35 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:11:05.152 15:01:35 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:11:05.152 15:01:35 -- target/multipath.sh@144 -- # nvmftestfini 00:11:05.152 15:01:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:05.152 15:01:35 -- nvmf/common.sh@116 -- # sync 00:11:05.411 15:01:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:05.411 15:01:35 -- nvmf/common.sh@119 -- # set +e 00:11:05.411 15:01:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:05.411 15:01:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:05.411 rmmod nvme_tcp 00:11:05.411 rmmod nvme_fabrics 00:11:05.411 rmmod nvme_keyring 00:11:05.411 15:01:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:05.411 15:01:36 -- nvmf/common.sh@123 -- # set -e 00:11:05.411 15:01:36 -- nvmf/common.sh@124 -- # return 0 00:11:05.411 15:01:36 -- nvmf/common.sh@477 -- # '[' -n 74083 ']' 00:11:05.411 15:01:36 -- nvmf/common.sh@478 -- # killprocess 74083 00:11:05.411 15:01:36 -- common/autotest_common.sh@936 -- # '[' -z 74083 ']' 00:11:05.411 15:01:36 -- common/autotest_common.sh@940 -- # kill -0 74083 00:11:05.411 15:01:36 -- common/autotest_common.sh@941 -- # uname 00:11:05.411 15:01:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:05.411 15:01:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74083 00:11:05.411 15:01:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:05.411 15:01:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:05.411 killing process with pid 74083 00:11:05.411 15:01:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74083' 00:11:05.411 15:01:36 -- common/autotest_common.sh@955 -- # kill 74083 00:11:05.411 15:01:36 -- common/autotest_common.sh@960 -- # wait 74083 00:11:05.411 15:01:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:05.411 15:01:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:05.411 15:01:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:05.411 15:01:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:05.411 15:01:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:05.411 15:01:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.411 15:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.411 15:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.671 15:01:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:05.671 00:11:05.671 real 0m19.803s 00:11:05.671 user 1m15.063s 00:11:05.671 sys 0m9.819s 00:11:05.671 15:01:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:05.671 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:11:05.671 ************************************ 00:11:05.671 END TEST nvmf_multipath 00:11:05.671 ************************************ 00:11:05.671 15:01:36 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:05.671 15:01:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:05.671 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:11:05.671 ************************************ 00:11:05.671 START TEST nvmf_zcopy 00:11:05.671 ************************************ 00:11:05.671 15:01:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:05.671 * Looking for test storage... 00:11:05.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:05.671 15:01:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:05.671 15:01:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:05.671 15:01:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:05.671 15:01:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:05.671 15:01:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:05.671 15:01:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:05.671 15:01:36 -- scripts/common.sh@335 -- # IFS=.-: 00:11:05.671 15:01:36 -- scripts/common.sh@335 -- # read -ra ver1 00:11:05.671 15:01:36 -- scripts/common.sh@336 -- # IFS=.-: 00:11:05.671 15:01:36 -- scripts/common.sh@336 -- # read -ra ver2 00:11:05.671 15:01:36 -- scripts/common.sh@337 -- # local 'op=<' 00:11:05.671 15:01:36 -- scripts/common.sh@339 -- # ver1_l=2 00:11:05.671 15:01:36 -- scripts/common.sh@340 -- # ver2_l=1 00:11:05.671 15:01:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:05.671 15:01:36 -- scripts/common.sh@343 -- # case "$op" in 00:11:05.671 15:01:36 -- scripts/common.sh@344 -- # : 1 00:11:05.671 15:01:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:05.671 15:01:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:05.671 15:01:36 -- scripts/common.sh@364 -- # decimal 1 00:11:05.671 15:01:36 -- scripts/common.sh@352 -- # local d=1 00:11:05.671 15:01:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:05.671 15:01:36 -- scripts/common.sh@354 -- # echo 1 00:11:05.671 15:01:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:05.671 15:01:36 -- scripts/common.sh@365 -- # decimal 2 00:11:05.671 15:01:36 -- scripts/common.sh@352 -- # local d=2 00:11:05.671 15:01:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:05.671 15:01:36 -- scripts/common.sh@354 -- # echo 2 00:11:05.671 15:01:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:05.671 15:01:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:05.671 15:01:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:05.671 15:01:36 -- scripts/common.sh@367 -- # return 0 00:11:05.671 15:01:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.671 --rc genhtml_branch_coverage=1 00:11:05.671 --rc genhtml_function_coverage=1 00:11:05.671 --rc genhtml_legend=1 00:11:05.671 --rc geninfo_all_blocks=1 00:11:05.671 --rc geninfo_unexecuted_blocks=1 00:11:05.671 00:11:05.671 ' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.671 --rc genhtml_branch_coverage=1 00:11:05.671 --rc genhtml_function_coverage=1 00:11:05.671 --rc genhtml_legend=1 00:11:05.671 --rc geninfo_all_blocks=1 00:11:05.671 --rc geninfo_unexecuted_blocks=1 00:11:05.671 00:11:05.671 ' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.671 --rc genhtml_branch_coverage=1 00:11:05.671 --rc genhtml_function_coverage=1 00:11:05.671 --rc genhtml_legend=1 00:11:05.671 --rc geninfo_all_blocks=1 00:11:05.671 --rc geninfo_unexecuted_blocks=1 00:11:05.671 00:11:05.671 ' 00:11:05.671 15:01:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:05.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:05.671 --rc genhtml_branch_coverage=1 00:11:05.671 --rc genhtml_function_coverage=1 00:11:05.671 --rc genhtml_legend=1 00:11:05.671 --rc geninfo_all_blocks=1 00:11:05.671 --rc geninfo_unexecuted_blocks=1 00:11:05.671 00:11:05.671 ' 00:11:05.671 15:01:36 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:05.671 15:01:36 -- nvmf/common.sh@7 -- # uname -s 00:11:05.929 15:01:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:05.929 15:01:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:05.929 15:01:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:05.929 15:01:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:05.929 15:01:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:05.929 15:01:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:05.929 15:01:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:05.929 15:01:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:05.929 15:01:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:05.929 15:01:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:05.929 
15:01:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:05.929 15:01:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:05.929 15:01:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:05.929 15:01:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:05.929 15:01:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.929 15:01:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.929 15:01:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.929 15:01:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.929 15:01:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.929 15:01:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.929 15:01:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.929 15:01:36 -- paths/export.sh@5 -- # export PATH 00:11:05.929 15:01:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.929 15:01:36 -- nvmf/common.sh@46 -- # : 0 00:11:05.929 15:01:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:05.929 15:01:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:05.929 15:01:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:05.929 15:01:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:05.929 15:01:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:05.929 15:01:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:11:05.929 15:01:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:05.929 15:01:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:05.929 15:01:36 -- target/zcopy.sh@12 -- # nvmftestinit 00:11:05.929 15:01:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:05.929 15:01:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:05.929 15:01:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:05.929 15:01:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:05.929 15:01:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:05.929 15:01:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.929 15:01:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:05.929 15:01:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.929 15:01:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:05.929 15:01:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:05.929 15:01:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.929 15:01:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.929 15:01:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:05.929 15:01:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:05.929 15:01:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:05.929 15:01:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:05.929 15:01:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:05.929 15:01:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.929 15:01:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:05.929 15:01:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:05.929 15:01:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:05.929 15:01:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:05.929 15:01:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:05.929 15:01:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:05.929 Cannot find device "nvmf_tgt_br" 00:11:05.929 15:01:36 -- nvmf/common.sh@154 -- # true 00:11:05.929 15:01:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.929 Cannot find device "nvmf_tgt_br2" 00:11:05.929 15:01:36 -- nvmf/common.sh@155 -- # true 00:11:05.929 15:01:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:05.929 15:01:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:05.929 Cannot find device "nvmf_tgt_br" 00:11:05.929 15:01:36 -- nvmf/common.sh@157 -- # true 00:11:05.929 15:01:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:05.930 Cannot find device "nvmf_tgt_br2" 00:11:05.930 15:01:36 -- nvmf/common.sh@158 -- # true 00:11:05.930 15:01:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:05.930 15:01:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:05.930 15:01:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.930 15:01:36 -- nvmf/common.sh@161 -- # true 00:11:05.930 15:01:36 -- nvmf/common.sh@162 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.930 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:05.930 15:01:36 -- nvmf/common.sh@162 -- # true 00:11:05.930 15:01:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:05.930 15:01:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:05.930 15:01:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:05.930 15:01:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:05.930 15:01:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:05.930 15:01:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:05.930 15:01:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:05.930 15:01:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:05.930 15:01:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:05.930 15:01:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:05.930 15:01:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:05.930 15:01:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:05.930 15:01:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:05.930 15:01:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:05.930 15:01:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:05.930 15:01:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.188 15:01:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:06.188 15:01:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:06.188 15:01:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.188 15:01:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:06.188 15:01:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.188 15:01:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.188 15:01:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.188 15:01:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:06.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:06.188 00:11:06.188 --- 10.0.0.2 ping statistics --- 00:11:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.188 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:06.188 15:01:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:06.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:06.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:06.188 00:11:06.188 --- 10.0.0.3 ping statistics --- 00:11:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.188 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:06.188 15:01:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:06.188 00:11:06.188 --- 10.0.0.1 ping statistics --- 00:11:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.188 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:06.188 15:01:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.188 15:01:36 -- nvmf/common.sh@421 -- # return 0 00:11:06.188 15:01:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:06.188 15:01:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.188 15:01:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:06.188 15:01:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:06.188 15:01:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.188 15:01:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:06.188 15:01:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:06.188 15:01:36 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:06.188 15:01:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:06.188 15:01:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.188 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.188 15:01:36 -- nvmf/common.sh@469 -- # nvmfpid=74579 00:11:06.188 15:01:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:06.188 15:01:36 -- nvmf/common.sh@470 -- # waitforlisten 74579 00:11:06.188 15:01:36 -- common/autotest_common.sh@829 -- # '[' -z 74579 ']' 00:11:06.188 15:01:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.188 15:01:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.188 15:01:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:06.188 15:01:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.188 15:01:36 -- common/autotest_common.sh@10 -- # set +x 00:11:06.188 [2024-11-20 15:01:36.889082] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:06.188 [2024-11-20 15:01:36.889187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.446 [2024-11-20 15:01:37.026123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.446 [2024-11-20 15:01:37.065919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:06.446 [2024-11-20 15:01:37.066112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:06.446 [2024-11-20 15:01:37.066140] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:06.446 [2024-11-20 15:01:37.066156] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:06.446 [2024-11-20 15:01:37.066196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.446 15:01:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.446 15:01:37 -- common/autotest_common.sh@862 -- # return 0 00:11:06.446 15:01:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:06.446 15:01:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.446 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.446 15:01:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.446 15:01:37 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:06.447 15:01:37 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 [2024-11-20 15:01:37.184162] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 [2024-11-20 15:01:37.200309] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 malloc0 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:06.447 15:01:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.447 15:01:37 -- common/autotest_common.sh@10 -- # set +x 00:11:06.447 15:01:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.447 15:01:37 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:06.447 15:01:37 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:06.447 15:01:37 -- nvmf/common.sh@520 -- # config=() 00:11:06.447 15:01:37 -- nvmf/common.sh@520 -- # local subsystem config 00:11:06.447 15:01:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:06.447 15:01:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:06.447 { 00:11:06.447 "params": { 00:11:06.447 "name": "Nvme$subsystem", 00:11:06.447 "trtype": "$TEST_TRANSPORT", 
00:11:06.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.447 "adrfam": "ipv4", 00:11:06.447 "trsvcid": "$NVMF_PORT", 00:11:06.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.447 "hdgst": ${hdgst:-false}, 00:11:06.447 "ddgst": ${ddgst:-false} 00:11:06.447 }, 00:11:06.447 "method": "bdev_nvme_attach_controller" 00:11:06.447 } 00:11:06.447 EOF 00:11:06.447 )") 00:11:06.447 15:01:37 -- nvmf/common.sh@542 -- # cat 00:11:06.447 15:01:37 -- nvmf/common.sh@544 -- # jq . 00:11:06.447 15:01:37 -- nvmf/common.sh@545 -- # IFS=, 00:11:06.447 15:01:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:06.447 "params": { 00:11:06.447 "name": "Nvme1", 00:11:06.447 "trtype": "tcp", 00:11:06.447 "traddr": "10.0.0.2", 00:11:06.447 "adrfam": "ipv4", 00:11:06.447 "trsvcid": "4420", 00:11:06.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.447 "hdgst": false, 00:11:06.447 "ddgst": false 00:11:06.447 }, 00:11:06.447 "method": "bdev_nvme_attach_controller" 00:11:06.447 }' 00:11:06.704 [2024-11-20 15:01:37.279964] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:06.704 [2024-11-20 15:01:37.280039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74609 ] 00:11:06.704 [2024-11-20 15:01:37.416910] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.704 [2024-11-20 15:01:37.451464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.963 Running I/O for 10 seconds... 00:11:17.035 00:11:17.035 Latency(us) 00:11:17.035 [2024-11-20T15:01:47.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:17.035 [2024-11-20T15:01:47.839Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:17.035 Verification LBA range: start 0x0 length 0x1000 00:11:17.035 Nvme1n1 : 10.01 8818.15 68.89 0.00 0.00 14477.16 1131.99 22758.87 00:11:17.035 [2024-11-20T15:01:47.839Z] =================================================================================================================== 00:11:17.035 [2024-11-20T15:01:47.839Z] Total : 8818.15 68.89 0.00 0.00 14477.16 1131.99 22758.87 00:11:17.035 15:01:47 -- target/zcopy.sh@39 -- # perfpid=74722 00:11:17.035 15:01:47 -- target/zcopy.sh@41 -- # xtrace_disable 00:11:17.035 15:01:47 -- common/autotest_common.sh@10 -- # set +x 00:11:17.035 15:01:47 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:17.035 15:01:47 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:17.035 15:01:47 -- nvmf/common.sh@520 -- # config=() 00:11:17.035 15:01:47 -- nvmf/common.sh@520 -- # local subsystem config 00:11:17.035 15:01:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:17.035 15:01:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:17.035 { 00:11:17.035 "params": { 00:11:17.035 "name": "Nvme$subsystem", 00:11:17.035 "trtype": "$TEST_TRANSPORT", 00:11:17.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:17.035 "adrfam": "ipv4", 00:11:17.035 "trsvcid": "$NVMF_PORT", 00:11:17.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:17.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:17.035 "hdgst": ${hdgst:-false}, 00:11:17.035 "ddgst": ${ddgst:-false} 
00:11:17.035 }, 00:11:17.035 "method": "bdev_nvme_attach_controller" 00:11:17.035 } 00:11:17.035 EOF 00:11:17.035 )") 00:11:17.035 15:01:47 -- nvmf/common.sh@542 -- # cat 00:11:17.035 15:01:47 -- nvmf/common.sh@544 -- # jq . 00:11:17.035 [2024-11-20 15:01:47.744827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.744878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 15:01:47 -- nvmf/common.sh@545 -- # IFS=, 00:11:17.035 15:01:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:17.035 "params": { 00:11:17.035 "name": "Nvme1", 00:11:17.035 "trtype": "tcp", 00:11:17.035 "traddr": "10.0.0.2", 00:11:17.035 "adrfam": "ipv4", 00:11:17.035 "trsvcid": "4420", 00:11:17.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:17.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:17.035 "hdgst": false, 00:11:17.035 "ddgst": false 00:11:17.035 }, 00:11:17.035 "method": "bdev_nvme_attach_controller" 00:11:17.035 }' 00:11:17.035 [2024-11-20 15:01:47.752791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.752825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.760764] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.760801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.768765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.768792] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.780815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.780861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.786471] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:11:17.035 [2024-11-20 15:01:47.786557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74722 ] 00:11:17.035 [2024-11-20 15:01:47.792803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.792837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.800779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.800807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.808780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.808807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.820803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.820833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.035 [2024-11-20 15:01:47.832788] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.035 [2024-11-20 15:01:47.832814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.844789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.844815] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.856793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.856822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.868805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.868831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.880820] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.880854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.892807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.892835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.904804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.904830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.916809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.916835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.924604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.295 [2024-11-20 15:01:47.928827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.928860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:11:17.295 [2024-11-20 15:01:47.940840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.940874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.952850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.952888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.957930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.295 [2024-11-20 15:01:47.964823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.964849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.976857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.976894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:47.988871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:47.988910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.000869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.000908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.012853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.012886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.024862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.024895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.036880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.036919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.048884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.048918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.060891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.060922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.072904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.072935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 [2024-11-20 15:01:48.084948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.084981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.295 Running I/O for 5 seconds... 
00:11:17.295 [2024-11-20 15:01:48.096963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.295 [2024-11-20 15:01:48.096996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.113806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.113840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.132695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.132731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.147287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.147323] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.159744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.159775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.176553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.176585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.193152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.193183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.209727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.209758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.226771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.226809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.243619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.243669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.260253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.260288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.277029] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.277065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.293851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.293885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.310858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.310897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.326325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 
[2024-11-20 15:01:48.326359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.335462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.335508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.554 [2024-11-20 15:01:48.351664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.554 [2024-11-20 15:01:48.351725] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.370886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.370925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.385783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.385816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.405191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.405225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.419650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.419682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.431478] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.431512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.448623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.448695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.463794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.463854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.473597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.473649] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.490079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.490137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.507131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.507189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.523562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.523620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.540385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.540432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.556618] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.556696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.573632] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.573698] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.589766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.589805] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:17.814 [2024-11-20 15:01:48.606974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:17.814 [2024-11-20 15:01:48.607030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.622492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.622546] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.631612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.631665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.647828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.647870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.665385] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.665419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.681832] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.681878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.698717] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.698784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.715268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.715314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.732513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.732547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.748024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.748058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.765241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.765276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.782890] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.782943] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.797015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.797072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.812992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.813054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.829236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.829300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.846539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.846613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.073 [2024-11-20 15:01:48.862418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.073 [2024-11-20 15:01:48.862484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.878870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.878932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.895063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.895117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.912187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.912228] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.921840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.921908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.937311] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.937383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.953504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.953563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.970825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.970859] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:48.988192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:48.988239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:49.002671] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:49.002704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:49.018303] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:49.018350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:49.034839] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:49.034871] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.332 [2024-11-20 15:01:49.052500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.332 [2024-11-20 15:01:49.052548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.333 [2024-11-20 15:01:49.068969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.333 [2024-11-20 15:01:49.069018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.333 [2024-11-20 15:01:49.085523] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.333 [2024-11-20 15:01:49.085577] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.333 [2024-11-20 15:01:49.101546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.333 [2024-11-20 15:01:49.101619] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.333 [2024-11-20 15:01:49.111306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.333 [2024-11-20 15:01:49.111377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.333 [2024-11-20 15:01:49.127305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.333 [2024-11-20 15:01:49.127367] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.143365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.143433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.161175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.161234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.177893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.177926] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.193316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.193364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.211044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.211092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.225458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.225507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.241269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.241318] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.258193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.258229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.275175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.275211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.291207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.291261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.308287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.308349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.324807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.324851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.341144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.341175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.357605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.357651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.374611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.374655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.590 [2024-11-20 15:01:49.391284] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.590 [2024-11-20 15:01:49.391330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.848 [2024-11-20 15:01:49.407073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.848 [2024-11-20 15:01:49.407121] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.848 [2024-11-20 15:01:49.425603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.848 [2024-11-20 15:01:49.425636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.848 [2024-11-20 15:01:49.440046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.848 [2024-11-20 15:01:49.440079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.848 [2024-11-20 15:01:49.457809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.848 [2024-11-20 15:01:49.457873] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.848 [2024-11-20 15:01:49.472716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.848 [2024-11-20 15:01:49.472775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.481856] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.481909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.498564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.498623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.514318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.514350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.525841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.525875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.543281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.543314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.558138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.558201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.574321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.574370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.590368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.590404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.599444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.599477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.615706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.615761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.634761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.634794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:18.849 [2024-11-20 15:01:49.649892] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:18.849 [2024-11-20 15:01:49.649925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.659410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.659443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.674300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.674347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.690692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.690728] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.707366] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.707425] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.723812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.723870] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.739655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.739688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.758519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.758554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.773080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.773128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.790476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.790524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.806944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.806978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.824214] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.824261] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.838963] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.839012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.856484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.856532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.872165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.872215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.890090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.890123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.108 [2024-11-20 15:01:49.904597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.108 [2024-11-20 15:01:49.904631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:49.921154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:49.921209] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:49.937112] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:49.937160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:49.955989] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:49.956037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:49.970816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:49.970861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:49.988204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:49.988257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.004233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.004294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.021390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.021439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.038248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.038312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.050344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.050395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.062200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.062254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.076997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.077045] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.088664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.088738] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.104451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.104499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.121291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.121342] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.137466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.137534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.152678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.152740] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.368 [2024-11-20 15:01:50.168370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.368 [2024-11-20 15:01:50.168405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.186848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.186909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.200471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.200535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.216777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.216834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.233409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.233473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.250920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.250980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.265790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.265848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.283096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.283154] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.299129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.299181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.308266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.308315] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.324695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.324740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.334975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.335007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.349464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.349504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.365095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.365129] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.374334] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.374369] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.390538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.390588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.400370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.400410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.411432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.411481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.628 [2024-11-20 15:01:50.427184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.628 [2024-11-20 15:01:50.427251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.444236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.444270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.458899] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.458932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.475542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.475576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.491088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.491153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.500357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.500406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.516912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.516949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.533607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.533659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.551203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.551256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.565915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.565982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.575561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.575598] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.592022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.592076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.609571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.609642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.624690] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.624737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.633933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.633967] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.647500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.647533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.887 [2024-11-20 15:01:50.657532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.887 [2024-11-20 15:01:50.657573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.888 [2024-11-20 15:01:50.672387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.888 [2024-11-20 15:01:50.672437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.888 [2024-11-20 15:01:50.688939] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.888 [2024-11-20 15:01:50.688973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.706109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.706156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.721924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.721958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.739143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.739181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.756175] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.756226] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.771793] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.771854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.790884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.790917] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.805422] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.805455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.815597] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.815630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.827002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.827036] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.839162] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.839212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.848730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.848768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.865350] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.865397] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.884442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.884479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.899335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.147 [2024-11-20 15:01:50.899373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.147 [2024-11-20 15:01:50.918298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.148 [2024-11-20 15:01:50.918371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.148 [2024-11-20 15:01:50.933198] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.148 [2024-11-20 15:01:50.933232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.148 [2024-11-20 15:01:50.942738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.148 [2024-11-20 15:01:50.942773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:50.958614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:50.958675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:50.975227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:50.975298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:50.992656] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:50.992730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.007599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.007648] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.017452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.017485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.033013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.033046] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.049402] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.049436] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.059006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.059040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.073862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.073894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.091235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.091285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.106992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.107090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.124121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.124169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.138377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.406 [2024-11-20 15:01:51.138409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.406 [2024-11-20 15:01:51.154139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.407 [2024-11-20 15:01:51.154188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.407 [2024-11-20 15:01:51.170888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.407 [2024-11-20 15:01:51.170922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.407 [2024-11-20 15:01:51.187835] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.407 [2024-11-20 15:01:51.187886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.407 [2024-11-20 15:01:51.203889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.407 [2024-11-20 15:01:51.203934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.220413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.220448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.236449] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.236495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.255484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.255545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.269949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.269993] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.285898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.285934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.302418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.302456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.319765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.319806] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.334267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.334309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.351415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.351459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.366191] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.366230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.375822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.375855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.391898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.391936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.403512] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.403547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.420510] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.420543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.437061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.437112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.665 [2024-11-20 15:01:51.454132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.665 [2024-11-20 15:01:51.454185] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.469952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.469994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.488504] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.488541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.498874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.498909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.513322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.513357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.522972] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.523006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.538633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.538680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.556984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.557021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.571179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.571216] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.586610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.586670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.604595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.604664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.619166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.619202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.628098] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.628133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.644368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.644407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.661348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.661409] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.678198] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.678256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.695532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.695602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.709901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.709952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.945 [2024-11-20 15:01:51.719273] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.945 [2024-11-20 15:01:51.719311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.733715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.733776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.750616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.750683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.767151] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.767208] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.784473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.784515] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.801015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.801075] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.817459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.817508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.836204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.836244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.850580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.850632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.866121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.866155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.875296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.875331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.891394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.891433] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.908464] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.908524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.926744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.926810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.941074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.941135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.958076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.958124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.973982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.974048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:51.992737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:51.992794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:52.007134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:52.007183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:52.024310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:52.024378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.247 [2024-11-20 15:01:52.039102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.247 [2024-11-20 15:01:52.039160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.055241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.055298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.065099] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.065144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.080927] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.080989] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.098503] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.098568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.114575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.114658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.131912] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.131960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.148975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.149014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.164773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.164808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.183046] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.183086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.197412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.197452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.212798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.212835] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.230037] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.506 [2024-11-20 15:01:52.230083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.506 [2024-11-20 15:01:52.248554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.507 [2024-11-20 15:01:52.248599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.507 [2024-11-20 15:01:52.263575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.507 [2024-11-20 15:01:52.263633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.507 [2024-11-20 15:01:52.280307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.507 [2024-11-20 15:01:52.280353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.507 [2024-11-20 15:01:52.297544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.507 [2024-11-20 15:01:52.297584] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.312930] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.312969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.331306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.331343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.346196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.346245] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.355517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.355567] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.371079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.371126] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.388343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.388391] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.403008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.403065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.412919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.412955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.426014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.426049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.440488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.440526] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.455751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.455786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.473569] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.473608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.489061] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.489108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.507634] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.507688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.523103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.523155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.542076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.542128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:21.766 [2024-11-20 15:01:52.556549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:21.766 [2024-11-20 15:01:52.556595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.025 [2024-11-20 15:01:52.572047] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.025 [2024-11-20 15:01:52.572087] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.025 [2024-11-20 15:01:52.581147] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.025 [2024-11-20 15:01:52.581188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.025 [2024-11-20 15:01:52.597520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.025 [2024-11-20 15:01:52.597563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.025 [2024-11-20 15:01:52.616761] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.025 [2024-11-20 15:01:52.616825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.025 [2024-11-20 15:01:52.631156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.025 [2024-11-20 15:01:52.631240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.647548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.647603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.664576] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.664628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.680587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.680630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.699886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.699918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.714917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.714951] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.733556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.733590] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.747781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.747814] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.765087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.765119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.781395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.781450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.798924] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.798962] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.026 [2024-11-20 15:01:52.813546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.026 [2024-11-20 15:01:52.813584] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.829165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.829200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.847470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.847508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.863084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.863119] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.879425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.879463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.896370] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.896407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.912357] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.912396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.929585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.929634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.946869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.946908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.963921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.963980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.980132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.980192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:52.997305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:52.997341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:53.014430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:53.014476] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:53.030031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:53.030067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:53.039736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:53.039768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:53.055464] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:53.055499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.285 [2024-11-20 15:01:53.071117] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.285 [2024-11-20 15:01:53.071156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.090221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.090270] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.100604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.100667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 00:11:22.545 Latency(us) 00:11:22.545 [2024-11-20T15:01:53.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:22.545 [2024-11-20T15:01:53.349Z] Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:22.545 Nvme1n1 : 5.01 11811.28 92.28 0.00 0.00 10824.01 4468.36 21448.15 00:11:22.545 [2024-11-20T15:01:53.349Z] =================================================================================================================== 00:11:22.545 [2024-11-20T15:01:53.349Z] Total : 11811.28 92.28 0.00 0.00 10824.01 4468.36 21448.15 00:11:22.545 [2024-11-20 15:01:53.112588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.112633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.124615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.124679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.136641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.136711] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.148628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.148707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.160614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.160666] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.172635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.172719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.184608] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.184690] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.196643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.196716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.208620] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.208706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.220654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.220721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.232616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.232657] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 [2024-11-20 15:01:53.244602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.545 [2024-11-20 15:01:53.244628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.545 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74722) - No such process 00:11:22.545 15:01:53 -- target/zcopy.sh@49 -- # wait 74722 00:11:22.545 15:01:53 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.545 15:01:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.545 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:11:22.545 15:01:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.545 15:01:53 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:22.545 15:01:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.545 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:11:22.545 delay0 00:11:22.545 15:01:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.545 15:01:53 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:22.545 15:01:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.545 15:01:53 -- common/autotest_common.sh@10 -- # set +x 00:11:22.545 15:01:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.545 15:01:53 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:22.804 [2024-11-20 15:01:53.440289] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:29.383 Initializing NVMe Controllers 00:11:29.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:29.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:29.383 Initialization complete. Launching workers. 
00:11:29.384 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:11:29.384 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 366, failed to submit 33 00:11:29.384 success 257, unsuccess 109, failed 0 00:11:29.384 15:01:59 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:29.384 15:01:59 -- target/zcopy.sh@60 -- # nvmftestfini 00:11:29.384 15:01:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:29.384 15:01:59 -- nvmf/common.sh@116 -- # sync 00:11:29.384 15:01:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:29.384 15:01:59 -- nvmf/common.sh@119 -- # set +e 00:11:29.384 15:01:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:29.384 15:01:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:29.384 rmmod nvme_tcp 00:11:29.384 rmmod nvme_fabrics 00:11:29.384 rmmod nvme_keyring 00:11:29.384 15:01:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:29.384 15:01:59 -- nvmf/common.sh@123 -- # set -e 00:11:29.384 15:01:59 -- nvmf/common.sh@124 -- # return 0 00:11:29.384 15:01:59 -- nvmf/common.sh@477 -- # '[' -n 74579 ']' 00:11:29.384 15:01:59 -- nvmf/common.sh@478 -- # killprocess 74579 00:11:29.384 15:01:59 -- common/autotest_common.sh@936 -- # '[' -z 74579 ']' 00:11:29.384 15:01:59 -- common/autotest_common.sh@940 -- # kill -0 74579 00:11:29.384 15:01:59 -- common/autotest_common.sh@941 -- # uname 00:11:29.384 15:01:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:29.384 15:01:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74579 00:11:29.384 15:01:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:29.384 killing process with pid 74579 00:11:29.384 15:01:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:29.384 15:01:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74579' 00:11:29.384 15:01:59 -- common/autotest_common.sh@955 -- # kill 74579 00:11:29.384 15:01:59 -- common/autotest_common.sh@960 -- # wait 74579 00:11:29.384 15:01:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:29.384 15:01:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:29.384 15:01:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:29.384 15:01:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.384 15:01:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:29.384 15:01:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.384 15:01:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.384 15:01:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.384 15:01:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:29.384 00:11:29.384 real 0m23.525s 00:11:29.384 user 0m39.127s 00:11:29.384 sys 0m6.327s 00:11:29.384 15:01:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:29.384 ************************************ 00:11:29.384 END TEST nvmf_zcopy 00:11:29.384 ************************************ 00:11:29.384 15:01:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 15:01:59 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:29.384 15:01:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:29.384 15:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.384 15:01:59 -- common/autotest_common.sh@10 -- # set +x 00:11:29.384 ************************************ 00:11:29.384 START TEST nvmf_nmic 
00:11:29.384 ************************************ 00:11:29.384 15:01:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:29.384 * Looking for test storage... 00:11:29.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:29.384 15:01:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:29.384 15:01:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:29.384 15:01:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:29.384 15:02:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:29.384 15:02:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:29.384 15:02:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:29.384 15:02:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:29.384 15:02:00 -- scripts/common.sh@335 -- # IFS=.-: 00:11:29.384 15:02:00 -- scripts/common.sh@335 -- # read -ra ver1 00:11:29.384 15:02:00 -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.384 15:02:00 -- scripts/common.sh@336 -- # read -ra ver2 00:11:29.384 15:02:00 -- scripts/common.sh@337 -- # local 'op=<' 00:11:29.384 15:02:00 -- scripts/common.sh@339 -- # ver1_l=2 00:11:29.384 15:02:00 -- scripts/common.sh@340 -- # ver2_l=1 00:11:29.384 15:02:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:29.384 15:02:00 -- scripts/common.sh@343 -- # case "$op" in 00:11:29.384 15:02:00 -- scripts/common.sh@344 -- # : 1 00:11:29.384 15:02:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:29.384 15:02:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.384 15:02:00 -- scripts/common.sh@364 -- # decimal 1 00:11:29.384 15:02:00 -- scripts/common.sh@352 -- # local d=1 00:11:29.384 15:02:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.384 15:02:00 -- scripts/common.sh@354 -- # echo 1 00:11:29.384 15:02:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:29.384 15:02:00 -- scripts/common.sh@365 -- # decimal 2 00:11:29.384 15:02:00 -- scripts/common.sh@352 -- # local d=2 00:11:29.384 15:02:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.384 15:02:00 -- scripts/common.sh@354 -- # echo 2 00:11:29.384 15:02:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:29.384 15:02:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:29.384 15:02:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:29.384 15:02:00 -- scripts/common.sh@367 -- # return 0 00:11:29.384 15:02:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.384 15:02:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:29.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.384 --rc genhtml_branch_coverage=1 00:11:29.384 --rc genhtml_function_coverage=1 00:11:29.384 --rc genhtml_legend=1 00:11:29.384 --rc geninfo_all_blocks=1 00:11:29.384 --rc geninfo_unexecuted_blocks=1 00:11:29.384 00:11:29.384 ' 00:11:29.384 15:02:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:29.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.384 --rc genhtml_branch_coverage=1 00:11:29.384 --rc genhtml_function_coverage=1 00:11:29.384 --rc genhtml_legend=1 00:11:29.384 --rc geninfo_all_blocks=1 00:11:29.384 --rc geninfo_unexecuted_blocks=1 00:11:29.384 00:11:29.384 ' 00:11:29.384 15:02:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:29.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.384 --rc 
genhtml_branch_coverage=1 00:11:29.384 --rc genhtml_function_coverage=1 00:11:29.384 --rc genhtml_legend=1 00:11:29.384 --rc geninfo_all_blocks=1 00:11:29.384 --rc geninfo_unexecuted_blocks=1 00:11:29.384 00:11:29.384 ' 00:11:29.384 15:02:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:29.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.384 --rc genhtml_branch_coverage=1 00:11:29.384 --rc genhtml_function_coverage=1 00:11:29.384 --rc genhtml_legend=1 00:11:29.384 --rc geninfo_all_blocks=1 00:11:29.384 --rc geninfo_unexecuted_blocks=1 00:11:29.384 00:11:29.384 ' 00:11:29.384 15:02:00 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:29.384 15:02:00 -- nvmf/common.sh@7 -- # uname -s 00:11:29.384 15:02:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.384 15:02:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.384 15:02:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.384 15:02:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.385 15:02:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.385 15:02:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.385 15:02:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.385 15:02:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.385 15:02:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.385 15:02:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.385 15:02:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:29.385 15:02:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:29.385 15:02:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.385 15:02:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.385 15:02:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:29.385 15:02:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:29.385 15:02:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.385 15:02:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.385 15:02:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.385 15:02:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.385 15:02:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.385 15:02:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.385 15:02:00 -- paths/export.sh@5 -- # export PATH 00:11:29.385 15:02:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.385 15:02:00 -- nvmf/common.sh@46 -- # : 0 00:11:29.385 15:02:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:29.385 15:02:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:29.385 15:02:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:29.385 15:02:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.385 15:02:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.385 15:02:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:29.385 15:02:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:29.385 15:02:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:29.385 15:02:00 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:29.385 15:02:00 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:29.385 15:02:00 -- target/nmic.sh@14 -- # nvmftestinit 00:11:29.385 15:02:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:29.385 15:02:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.385 15:02:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:29.385 15:02:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:29.385 15:02:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:29.385 15:02:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.385 15:02:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.385 15:02:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.385 15:02:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:29.385 15:02:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:29.385 15:02:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:29.385 15:02:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:29.385 15:02:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:29.385 15:02:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:29.385 15:02:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.385 15:02:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.385 15:02:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:29.385 15:02:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:29.385 15:02:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:29.385 15:02:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:29.385 15:02:00 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:29.385 15:02:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:29.385 15:02:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:29.385 15:02:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:29.385 15:02:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:29.385 15:02:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:29.385 15:02:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:29.385 15:02:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:29.385 Cannot find device "nvmf_tgt_br" 00:11:29.385 15:02:00 -- nvmf/common.sh@154 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:29.385 Cannot find device "nvmf_tgt_br2" 00:11:29.385 15:02:00 -- nvmf/common.sh@155 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:29.385 15:02:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:29.385 Cannot find device "nvmf_tgt_br" 00:11:29.385 15:02:00 -- nvmf/common.sh@157 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:29.385 Cannot find device "nvmf_tgt_br2" 00:11:29.385 15:02:00 -- nvmf/common.sh@158 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:29.385 15:02:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:29.385 15:02:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:29.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.385 15:02:00 -- nvmf/common.sh@161 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:29.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:29.385 15:02:00 -- nvmf/common.sh@162 -- # true 00:11:29.385 15:02:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:29.385 15:02:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:29.385 15:02:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:29.645 15:02:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:29.645 15:02:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:29.645 15:02:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:29.645 15:02:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:29.645 15:02:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:29.645 15:02:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:29.645 15:02:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:29.645 15:02:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:29.645 15:02:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:29.645 15:02:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:29.645 15:02:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:29.645 15:02:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:29.645 15:02:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:29.645 15:02:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:29.645 15:02:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:29.645 15:02:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:29.645 15:02:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:29.645 15:02:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:29.645 15:02:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:29.645 15:02:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:29.645 15:02:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:29.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:29.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:11:29.645 00:11:29.646 --- 10.0.0.2 ping statistics --- 00:11:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.646 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:11:29.646 15:02:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:29.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:29.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:11:29.646 00:11:29.646 --- 10.0.0.3 ping statistics --- 00:11:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.646 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:11:29.646 15:02:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:29.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:29.646 00:11:29.646 --- 10.0.0.1 ping statistics --- 00:11:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.646 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:29.646 15:02:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.646 15:02:00 -- nvmf/common.sh@421 -- # return 0 00:11:29.646 15:02:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:29.646 15:02:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.646 15:02:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:29.646 15:02:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:29.646 15:02:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.646 15:02:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:29.646 15:02:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:29.646 15:02:00 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:29.646 15:02:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:29.646 15:02:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:29.646 15:02:00 -- common/autotest_common.sh@10 -- # set +x 00:11:29.646 15:02:00 -- nvmf/common.sh@469 -- # nvmfpid=75049 00:11:29.646 15:02:00 -- nvmf/common.sh@470 -- # waitforlisten 75049 00:11:29.646 15:02:00 -- common/autotest_common.sh@829 -- # '[' -z 75049 ']' 00:11:29.646 15:02:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.646 15:02:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:29.646 15:02:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:29.646 15:02:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.646 15:02:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:29.646 15:02:00 -- common/autotest_common.sh@10 -- # set +x 00:11:29.905 [2024-11-20 15:02:00.465395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:29.905 [2024-11-20 15:02:00.465502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.905 [2024-11-20 15:02:00.622849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.905 [2024-11-20 15:02:00.670782] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:29.905 [2024-11-20 15:02:00.671112] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.905 [2024-11-20 15:02:00.671131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.905 [2024-11-20 15:02:00.671143] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.905 [2024-11-20 15:02:00.671231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.905 [2024-11-20 15:02:00.671924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.905 [2024-11-20 15:02:00.671990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.905 [2024-11-20 15:02:00.672001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.841 15:02:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:30.841 15:02:01 -- common/autotest_common.sh@862 -- # return 0 00:11:30.841 15:02:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:30.841 15:02:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:30.841 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:30.841 15:02:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.841 15:02:01 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.841 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.841 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:30.841 [2024-11-20 15:02:01.615732] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.841 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.841 15:02:01 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:30.841 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.841 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 Malloc0 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.099 
15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 [2024-11-20 15:02:01.680520] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 test case1: single bdev can't be used in multiple subsystems 00:11:31.099 15:02:01 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:31.099 15:02:01 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@28 -- # nmic_status=0 00:11:31.099 15:02:01 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 [2024-11-20 15:02:01.704350] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:31.099 [2024-11-20 15:02:01.704411] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:31.099 [2024-11-20 15:02:01.704428] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:31.099 request: 00:11:31.099 { 00:11:31.099 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:31.099 "namespace": { 00:11:31.099 "bdev_name": "Malloc0" 00:11:31.099 }, 00:11:31.099 "method": "nvmf_subsystem_add_ns", 00:11:31.099 "req_id": 1 00:11:31.099 } 00:11:31.099 Got JSON-RPC error response 00:11:31.099 response: 00:11:31.099 { 00:11:31.099 "code": -32602, 00:11:31.099 "message": "Invalid parameters" 00:11:31.099 } 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@29 -- # nmic_status=1 00:11:31.099 15:02:01 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:31.099 Adding namespace failed - expected result. 00:11:31.099 15:02:01 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:11:31.099 test case2: host connect to nvmf target in multiple paths 00:11:31.099 15:02:01 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:31.099 15:02:01 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:31.099 15:02:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.099 15:02:01 -- common/autotest_common.sh@10 -- # set +x 00:11:31.099 [2024-11-20 15:02:01.716598] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:31.099 15:02:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.099 15:02:01 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.099 15:02:01 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:31.356 15:02:01 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.356 15:02:01 -- common/autotest_common.sh@1187 -- # local i=0 00:11:31.356 15:02:01 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.356 15:02:01 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:11:31.356 15:02:01 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:33.256 15:02:03 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:33.256 15:02:03 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:33.256 15:02:03 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.256 15:02:03 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:11:33.256 15:02:03 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.256 15:02:03 -- common/autotest_common.sh@1197 -- # return 0 00:11:33.256 15:02:03 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:33.256 [global] 00:11:33.256 thread=1 00:11:33.256 invalidate=1 00:11:33.256 rw=write 00:11:33.256 time_based=1 00:11:33.256 runtime=1 00:11:33.256 ioengine=libaio 00:11:33.256 direct=1 00:11:33.256 bs=4096 00:11:33.256 iodepth=1 00:11:33.256 norandommap=0 00:11:33.256 numjobs=1 00:11:33.256 00:11:33.256 verify_dump=1 00:11:33.256 verify_backlog=512 00:11:33.256 verify_state_save=0 00:11:33.256 do_verify=1 00:11:33.256 verify=crc32c-intel 00:11:33.256 [job0] 00:11:33.256 filename=/dev/nvme0n1 00:11:33.256 Could not set queue depth (nvme0n1) 00:11:33.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:33.513 fio-3.35 00:11:33.513 Starting 1 thread 00:11:34.889 00:11:34.889 job0: (groupid=0, jobs=1): err= 0: pid=75141: Wed Nov 20 15:02:05 2024 00:11:34.889 read: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(11.8MiB/1001msec) 00:11:34.889 slat (nsec): min=11517, max=51959, avg=15132.92, stdev=3822.90 00:11:34.889 clat (usec): min=139, max=296, avg=178.96, stdev=20.14 00:11:34.889 lat (usec): min=152, max=309, avg=194.09, stdev=20.65 00:11:34.889 clat percentiles (usec): 00:11:34.889 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:11:34.889 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:11:34.889 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 
95.00th=[ 217], 00:11:34.889 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 289], 00:11:34.889 | 99.99th=[ 297] 00:11:34.889 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:34.889 slat (usec): min=16, max=114, avg=22.94, stdev= 5.48 00:11:34.889 clat (usec): min=84, max=345, avg=108.15, stdev=16.19 00:11:34.889 lat (usec): min=103, max=383, avg=131.09, stdev=18.02 00:11:34.889 clat percentiles (usec): 00:11:34.889 | 1.00th=[ 88], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 97], 00:11:34.889 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 108], 00:11:34.889 | 70.00th=[ 112], 80.00th=[ 118], 90.00th=[ 128], 95.00th=[ 139], 00:11:34.889 | 99.00th=[ 163], 99.50th=[ 178], 99.90th=[ 215], 99.95th=[ 255], 00:11:34.889 | 99.99th=[ 347] 00:11:34.889 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:11:34.889 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:34.889 lat (usec) : 100=16.74%, 250=82.47%, 500=0.79% 00:11:34.889 cpu : usr=2.60%, sys=8.90%, ctx=6094, majf=0, minf=5 00:11:34.889 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:34.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:34.889 issued rwts: total=3022,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:34.889 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:34.889 00:11:34.889 Run status group 0 (all jobs): 00:11:34.889 READ: bw=11.8MiB/s (12.4MB/s), 11.8MiB/s-11.8MiB/s (12.4MB/s-12.4MB/s), io=11.8MiB (12.4MB), run=1001-1001msec 00:11:34.889 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:11:34.889 00:11:34.889 Disk stats (read/write): 00:11:34.889 nvme0n1: ios=2610/2993, merge=0/0, ticks=474/345, in_queue=819, util=91.28% 00:11:34.889 15:02:05 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:34.889 15:02:05 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.889 15:02:05 -- common/autotest_common.sh@1208 -- # local i=0 00:11:34.889 15:02:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:34.889 15:02:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.889 15:02:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:34.889 15:02:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.889 15:02:05 -- common/autotest_common.sh@1220 -- # return 0 00:11:34.889 15:02:05 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:34.889 15:02:05 -- target/nmic.sh@53 -- # nvmftestfini 00:11:34.889 15:02:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:34.889 15:02:05 -- nvmf/common.sh@116 -- # sync 00:11:34.889 15:02:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:34.889 15:02:05 -- nvmf/common.sh@119 -- # set +e 00:11:34.889 15:02:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:34.889 15:02:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:34.889 rmmod nvme_tcp 00:11:34.889 rmmod nvme_fabrics 00:11:34.889 rmmod nvme_keyring 00:11:34.889 15:02:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:34.889 15:02:05 -- nvmf/common.sh@123 -- # set -e 00:11:34.889 15:02:05 -- nvmf/common.sh@124 -- # return 0 00:11:34.889 15:02:05 -- nvmf/common.sh@477 -- # '[' -n 
75049 ']' 00:11:34.889 15:02:05 -- nvmf/common.sh@478 -- # killprocess 75049 00:11:34.889 15:02:05 -- common/autotest_common.sh@936 -- # '[' -z 75049 ']' 00:11:34.889 15:02:05 -- common/autotest_common.sh@940 -- # kill -0 75049 00:11:34.889 15:02:05 -- common/autotest_common.sh@941 -- # uname 00:11:34.889 15:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:34.889 15:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75049 00:11:34.889 killing process with pid 75049 00:11:34.889 15:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:34.889 15:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:34.889 15:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75049' 00:11:34.889 15:02:05 -- common/autotest_common.sh@955 -- # kill 75049 00:11:34.889 15:02:05 -- common/autotest_common.sh@960 -- # wait 75049 00:11:35.148 15:02:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:35.148 15:02:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:35.148 15:02:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:35.148 15:02:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.148 15:02:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:35.148 15:02:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.148 15:02:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.148 15:02:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.148 15:02:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:35.148 00:11:35.148 real 0m5.936s 00:11:35.148 user 0m19.211s 00:11:35.148 sys 0m2.282s 00:11:35.148 15:02:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:35.148 ************************************ 00:11:35.148 END TEST nvmf_nmic 00:11:35.148 15:02:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.148 ************************************ 00:11:35.148 15:02:05 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:35.148 15:02:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:35.148 15:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.148 15:02:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.148 ************************************ 00:11:35.148 START TEST nvmf_fio_target 00:11:35.148 ************************************ 00:11:35.148 15:02:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:35.148 * Looking for test storage... 
00:11:35.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:35.148 15:02:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:35.148 15:02:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:35.148 15:02:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:35.407 15:02:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:35.407 15:02:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:35.407 15:02:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:35.407 15:02:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:35.407 15:02:06 -- scripts/common.sh@335 -- # IFS=.-: 00:11:35.407 15:02:06 -- scripts/common.sh@335 -- # read -ra ver1 00:11:35.407 15:02:06 -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.407 15:02:06 -- scripts/common.sh@336 -- # read -ra ver2 00:11:35.407 15:02:06 -- scripts/common.sh@337 -- # local 'op=<' 00:11:35.407 15:02:06 -- scripts/common.sh@339 -- # ver1_l=2 00:11:35.407 15:02:06 -- scripts/common.sh@340 -- # ver2_l=1 00:11:35.407 15:02:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:35.407 15:02:06 -- scripts/common.sh@343 -- # case "$op" in 00:11:35.407 15:02:06 -- scripts/common.sh@344 -- # : 1 00:11:35.407 15:02:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:35.407 15:02:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:35.407 15:02:06 -- scripts/common.sh@364 -- # decimal 1 00:11:35.407 15:02:06 -- scripts/common.sh@352 -- # local d=1 00:11:35.407 15:02:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.407 15:02:06 -- scripts/common.sh@354 -- # echo 1 00:11:35.407 15:02:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:35.407 15:02:06 -- scripts/common.sh@365 -- # decimal 2 00:11:35.407 15:02:06 -- scripts/common.sh@352 -- # local d=2 00:11:35.407 15:02:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.407 15:02:06 -- scripts/common.sh@354 -- # echo 2 00:11:35.407 15:02:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:35.407 15:02:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:35.407 15:02:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:35.407 15:02:06 -- scripts/common.sh@367 -- # return 0 00:11:35.407 15:02:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.407 15:02:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:35.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.407 --rc genhtml_branch_coverage=1 00:11:35.407 --rc genhtml_function_coverage=1 00:11:35.407 --rc genhtml_legend=1 00:11:35.407 --rc geninfo_all_blocks=1 00:11:35.407 --rc geninfo_unexecuted_blocks=1 00:11:35.407 00:11:35.407 ' 00:11:35.407 15:02:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:35.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.407 --rc genhtml_branch_coverage=1 00:11:35.407 --rc genhtml_function_coverage=1 00:11:35.407 --rc genhtml_legend=1 00:11:35.407 --rc geninfo_all_blocks=1 00:11:35.407 --rc geninfo_unexecuted_blocks=1 00:11:35.407 00:11:35.407 ' 00:11:35.407 15:02:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:35.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.407 --rc genhtml_branch_coverage=1 00:11:35.407 --rc genhtml_function_coverage=1 00:11:35.407 --rc genhtml_legend=1 00:11:35.407 --rc geninfo_all_blocks=1 00:11:35.407 --rc geninfo_unexecuted_blocks=1 00:11:35.407 00:11:35.407 ' 00:11:35.407 
15:02:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:35.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.407 --rc genhtml_branch_coverage=1 00:11:35.407 --rc genhtml_function_coverage=1 00:11:35.407 --rc genhtml_legend=1 00:11:35.407 --rc geninfo_all_blocks=1 00:11:35.407 --rc geninfo_unexecuted_blocks=1 00:11:35.407 00:11:35.407 ' 00:11:35.407 15:02:06 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:35.407 15:02:06 -- nvmf/common.sh@7 -- # uname -s 00:11:35.407 15:02:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.407 15:02:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.407 15:02:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.407 15:02:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.407 15:02:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.407 15:02:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.407 15:02:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.407 15:02:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.407 15:02:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.407 15:02:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.407 15:02:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:35.407 15:02:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:35.407 15:02:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.407 15:02:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.407 15:02:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:35.407 15:02:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:35.407 15:02:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.407 15:02:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.407 15:02:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.407 15:02:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.407 15:02:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.407 15:02:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.407 15:02:06 -- paths/export.sh@5 -- # export PATH 00:11:35.407 15:02:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.407 15:02:06 -- nvmf/common.sh@46 -- # : 0 00:11:35.407 15:02:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:35.407 15:02:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:35.407 15:02:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:35.407 15:02:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.407 15:02:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.407 15:02:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:35.407 15:02:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:35.408 15:02:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:35.408 15:02:06 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:35.408 15:02:06 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:35.408 15:02:06 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:35.408 15:02:06 -- target/fio.sh@16 -- # nvmftestinit 00:11:35.408 15:02:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:35.408 15:02:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.408 15:02:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:35.408 15:02:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:35.408 15:02:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:35.408 15:02:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.408 15:02:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.408 15:02:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.408 15:02:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:35.408 15:02:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:35.408 15:02:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:35.408 15:02:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:35.408 15:02:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:35.408 15:02:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:35.408 15:02:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.408 15:02:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.408 15:02:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:35.408 15:02:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:35.408 15:02:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:35.408 15:02:06 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:35.408 15:02:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:35.408 15:02:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.408 15:02:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:35.408 15:02:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:35.408 15:02:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:35.408 15:02:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:35.408 15:02:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:35.408 15:02:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:35.408 Cannot find device "nvmf_tgt_br" 00:11:35.408 15:02:06 -- nvmf/common.sh@154 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:35.408 Cannot find device "nvmf_tgt_br2" 00:11:35.408 15:02:06 -- nvmf/common.sh@155 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:35.408 15:02:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:35.408 Cannot find device "nvmf_tgt_br" 00:11:35.408 15:02:06 -- nvmf/common.sh@157 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:35.408 Cannot find device "nvmf_tgt_br2" 00:11:35.408 15:02:06 -- nvmf/common.sh@158 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:35.408 15:02:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:35.408 15:02:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:35.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.408 15:02:06 -- nvmf/common.sh@161 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:35.408 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:35.408 15:02:06 -- nvmf/common.sh@162 -- # true 00:11:35.408 15:02:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:35.408 15:02:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:35.408 15:02:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:35.408 15:02:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:35.408 15:02:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:35.666 15:02:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:35.666 15:02:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:35.666 15:02:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:35.666 15:02:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:35.666 15:02:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:35.666 15:02:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:35.666 15:02:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:35.666 15:02:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:35.666 15:02:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:35.666 15:02:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:11:35.666 15:02:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:35.666 15:02:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:35.666 15:02:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:35.666 15:02:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:35.666 15:02:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:35.666 15:02:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:35.666 15:02:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:35.666 15:02:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:35.666 15:02:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:35.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:35.666 00:11:35.666 --- 10.0.0.2 ping statistics --- 00:11:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.666 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:35.666 15:02:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:35.666 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:35.666 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:35.666 00:11:35.666 --- 10.0.0.3 ping statistics --- 00:11:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.666 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:35.666 15:02:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:35.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:35.666 00:11:35.666 --- 10.0.0.1 ping statistics --- 00:11:35.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.666 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:35.666 15:02:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.666 15:02:06 -- nvmf/common.sh@421 -- # return 0 00:11:35.666 15:02:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:35.666 15:02:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.666 15:02:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:35.666 15:02:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:35.666 15:02:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.666 15:02:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:35.666 15:02:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:35.666 15:02:06 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:35.666 15:02:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:35.666 15:02:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.666 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:11:35.666 15:02:06 -- nvmf/common.sh@469 -- # nvmfpid=75325 00:11:35.666 15:02:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.666 15:02:06 -- nvmf/common.sh@470 -- # waitforlisten 75325 00:11:35.666 15:02:06 -- common/autotest_common.sh@829 -- # '[' -z 75325 ']' 00:11:35.666 15:02:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.666 15:02:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.666 15:02:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:35.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.666 15:02:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.666 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:11:35.666 [2024-11-20 15:02:06.466335] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:35.666 [2024-11-20 15:02:06.466459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.925 [2024-11-20 15:02:06.609243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.925 [2024-11-20 15:02:06.649591] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:35.925 [2024-11-20 15:02:06.649780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.925 [2024-11-20 15:02:06.649798] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.925 [2024-11-20 15:02:06.649810] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.925 [2024-11-20 15:02:06.649960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.925 [2024-11-20 15:02:06.650636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.925 [2024-11-20 15:02:06.650818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.925 [2024-11-20 15:02:06.650826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.183 15:02:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:36.183 15:02:06 -- common/autotest_common.sh@862 -- # return 0 00:11:36.183 15:02:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:36.183 15:02:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:36.183 15:02:06 -- common/autotest_common.sh@10 -- # set +x 00:11:36.183 15:02:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.183 15:02:06 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.442 [2024-11-20 15:02:07.099592] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.442 15:02:07 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.701 15:02:07 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:36.701 15:02:07 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.960 15:02:07 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:36.960 15:02:07 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.219 15:02:07 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:37.219 15:02:07 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.477 15:02:08 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:37.477 15:02:08 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:37.737 15:02:08 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:37.996 15:02:08 -- target/fio.sh@29 -- # 
concat_malloc_bdevs='Malloc4 ' 00:11:37.996 15:02:08 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.254 15:02:09 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:38.254 15:02:09 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.512 15:02:09 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:38.512 15:02:09 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:38.771 15:02:09 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.338 15:02:09 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.338 15:02:09 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.338 15:02:10 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.338 15:02:10 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.596 15:02:10 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.854 [2024-11-20 15:02:10.549409] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.854 15:02:10 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:40.112 15:02:10 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:40.371 15:02:11 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.629 15:02:11 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:40.629 15:02:11 -- common/autotest_common.sh@1187 -- # local i=0 00:11:40.629 15:02:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.629 15:02:11 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:11:40.629 15:02:11 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:11:40.629 15:02:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:11:42.586 15:02:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:11:42.586 15:02:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:11:42.586 15:02:13 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.586 15:02:13 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:11:42.586 15:02:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.586 15:02:13 -- common/autotest_common.sh@1197 -- # return 0 00:11:42.586 15:02:13 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:42.586 [global] 00:11:42.586 thread=1 00:11:42.586 invalidate=1 00:11:42.586 rw=write 00:11:42.586 time_based=1 00:11:42.586 runtime=1 00:11:42.586 ioengine=libaio 00:11:42.586 direct=1 00:11:42.586 bs=4096 00:11:42.586 iodepth=1 00:11:42.586 norandommap=0 00:11:42.586 numjobs=1 00:11:42.586 00:11:42.586 verify_dump=1 00:11:42.586 verify_backlog=512 00:11:42.586 
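For reference, the NVMe-oF target exercised here is assembled with a short sequence of JSON-RPC calls, all of which appear in the trace above. A condensed sketch of that sequence (rpc.py stands for the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path used in the log, and the --hostnqn/--hostid flags of the real nvme connect call are left out):

  # transport plus seven 64 MB malloc bdevs (512 B blocks); two are combined into raid0, three into concat0
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                                    # repeated for Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  # one subsystem with four namespaces and a TCP listener on 10.0.0.2:4420
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # likewise Malloc1, raid0, concat0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: the four namespaces should then surface as /dev/nvme0n1 .. /dev/nvme0n4
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420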
verify_state_save=0 00:11:42.586 do_verify=1 00:11:42.586 verify=crc32c-intel 00:11:42.586 [job0] 00:11:42.586 filename=/dev/nvme0n1 00:11:42.586 [job1] 00:11:42.586 filename=/dev/nvme0n2 00:11:42.586 [job2] 00:11:42.586 filename=/dev/nvme0n3 00:11:42.586 [job3] 00:11:42.586 filename=/dev/nvme0n4 00:11:42.586 Could not set queue depth (nvme0n1) 00:11:42.586 Could not set queue depth (nvme0n2) 00:11:42.586 Could not set queue depth (nvme0n3) 00:11:42.586 Could not set queue depth (nvme0n4) 00:11:42.845 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.845 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.845 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:42.845 fio-3.35 00:11:42.845 Starting 4 threads 00:11:44.219 00:11:44.219 job0: (groupid=0, jobs=1): err= 0: pid=75503: Wed Nov 20 15:02:14 2024 00:11:44.219 read: IOPS=2964, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:11:44.219 slat (nsec): min=11035, max=56744, avg=14112.91, stdev=4261.84 00:11:44.219 clat (usec): min=132, max=7660, avg=168.57, stdev=140.59 00:11:44.219 lat (usec): min=144, max=7672, avg=182.69, stdev=140.71 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:11:44.219 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:11:44.219 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 194], 00:11:44.219 | 99.00th=[ 243], 99.50th=[ 255], 99.90th=[ 930], 99.95th=[ 1057], 00:11:44.219 | 99.99th=[ 7635] 00:11:44.219 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:44.219 slat (usec): min=14, max=112, avg=21.55, stdev= 5.60 00:11:44.219 clat (usec): min=91, max=244, avg=124.41, stdev=12.48 00:11:44.219 lat (usec): min=110, max=357, avg=145.96, stdev=13.67 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 98], 5.00th=[ 105], 10.00th=[ 110], 20.00th=[ 115], 00:11:44.219 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:11:44.219 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:11:44.219 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 182], 00:11:44.219 | 99.99th=[ 245] 00:11:44.219 bw ( KiB/s): min=12560, max=12560, per=31.04%, avg=12560.00, stdev= 0.00, samples=1 00:11:44.219 iops : min= 3140, max= 3140, avg=3140.00, stdev= 0.00, samples=1 00:11:44.219 lat (usec) : 100=1.16%, 250=98.54%, 500=0.23%, 750=0.02%, 1000=0.02% 00:11:44.219 lat (msec) : 2=0.02%, 10=0.02% 00:11:44.219 cpu : usr=2.20%, sys=8.50%, ctx=6042, majf=0, minf=7 00:11:44.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.219 issued rwts: total=2967,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.219 job1: (groupid=0, jobs=1): err= 0: pid=75504: Wed Nov 20 15:02:14 2024 00:11:44.219 read: IOPS=1665, BW=6661KiB/s (6821kB/s)(6668KiB/1001msec) 00:11:44.219 slat (nsec): min=8153, max=62466, avg=13924.67, stdev=6504.64 00:11:44.219 clat (usec): min=215, max=447, avg=274.49, stdev=37.73 00:11:44.219 lat (usec): 
min=226, max=458, avg=288.41, stdev=40.05 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 247], 00:11:44.219 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:11:44.219 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 363], 00:11:44.219 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 429], 99.95th=[ 449], 00:11:44.219 | 99.99th=[ 449] 00:11:44.219 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:44.219 slat (nsec): min=10792, max=80734, avg=24151.91, stdev=10371.73 00:11:44.219 clat (usec): min=103, max=602, avg=226.30, stdev=45.38 00:11:44.219 lat (usec): min=149, max=617, avg=250.45, stdev=47.19 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 174], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 198], 00:11:44.219 | 30.00th=[ 204], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 221], 00:11:44.219 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 281], 95.00th=[ 306], 00:11:44.219 | 99.00th=[ 437], 99.50th=[ 474], 99.90th=[ 529], 99.95th=[ 594], 00:11:44.219 | 99.99th=[ 603] 00:11:44.219 bw ( KiB/s): min= 8192, max= 8192, per=20.25%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.219 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:44.219 lat (usec) : 250=57.50%, 500=42.37%, 750=0.13% 00:11:44.219 cpu : usr=1.40%, sys=6.30%, ctx=3715, majf=0, minf=11 00:11:44.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.219 issued rwts: total=1667,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.219 job2: (groupid=0, jobs=1): err= 0: pid=75505: Wed Nov 20 15:02:14 2024 00:11:44.219 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:44.219 slat (usec): min=11, max=112, avg=17.44, stdev= 7.22 00:11:44.219 clat (usec): min=145, max=2347, avg=184.32, stdev=45.87 00:11:44.219 lat (usec): min=157, max=2362, avg=201.76, stdev=46.55 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:11:44.219 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:11:44.219 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:11:44.219 | 99.00th=[ 241], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 306], 00:11:44.219 | 99.99th=[ 2343] 00:11:44.219 write: IOPS=2955, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:11:44.219 slat (usec): min=14, max=108, avg=23.32, stdev= 7.11 00:11:44.219 clat (usec): min=102, max=1939, avg=136.68, stdev=37.40 00:11:44.219 lat (usec): min=120, max=1958, avg=160.01, stdev=38.20 00:11:44.219 clat percentiles (usec): 00:11:44.219 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 124], 00:11:44.219 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:11:44.219 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 165], 00:11:44.220 | 99.00th=[ 186], 99.50th=[ 200], 99.90th=[ 293], 99.95th=[ 562], 00:11:44.220 | 99.99th=[ 1942] 00:11:44.220 bw ( KiB/s): min=12288, max=12288, per=30.37%, avg=12288.00, stdev= 0.00, samples=1 00:11:44.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:44.220 lat (usec) : 250=99.57%, 500=0.38%, 750=0.02% 00:11:44.220 lat (msec) : 2=0.02%, 4=0.02% 00:11:44.220 cpu : usr=2.20%, sys=9.10%, 
ctx=5534, majf=0, minf=9 00:11:44.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.220 issued rwts: total=2560,2958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.220 job3: (groupid=0, jobs=1): err= 0: pid=75507: Wed Nov 20 15:02:14 2024 00:11:44.220 read: IOPS=1664, BW=6657KiB/s (6817kB/s)(6664KiB/1001msec) 00:11:44.220 slat (usec): min=8, max=100, avg=15.60, stdev= 7.18 00:11:44.220 clat (usec): min=171, max=559, avg=272.60, stdev=40.33 00:11:44.220 lat (usec): min=196, max=571, avg=288.20, stdev=40.64 00:11:44.220 clat percentiles (usec): 00:11:44.220 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 245], 00:11:44.220 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:11:44.220 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 338], 95.00th=[ 367], 00:11:44.220 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 562], 00:11:44.220 | 99.99th=[ 562] 00:11:44.220 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:44.220 slat (nsec): min=11357, max=95918, avg=27050.42, stdev=8634.11 00:11:44.220 clat (usec): min=142, max=682, avg=223.23, stdev=43.62 00:11:44.220 lat (usec): min=176, max=700, avg=250.28, stdev=46.98 00:11:44.220 clat percentiles (usec): 00:11:44.220 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:11:44.220 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 221], 00:11:44.220 | 70.00th=[ 227], 80.00th=[ 239], 90.00th=[ 273], 95.00th=[ 302], 00:11:44.220 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 510], 99.95th=[ 570], 00:11:44.220 | 99.99th=[ 685] 00:11:44.220 bw ( KiB/s): min= 8192, max= 8192, per=20.25%, avg=8192.00, stdev= 0.00, samples=1 00:11:44.220 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:44.220 lat (usec) : 250=60.18%, 500=39.71%, 750=0.11% 00:11:44.220 cpu : usr=1.90%, sys=6.80%, ctx=3714, majf=0, minf=11 00:11:44.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:44.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:44.220 issued rwts: total=1666,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:44.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:44.220 00:11:44.220 Run status group 0 (all jobs): 00:11:44.220 READ: bw=34.6MiB/s (36.3MB/s), 6657KiB/s-11.6MiB/s (6817kB/s-12.1MB/s), io=34.6MiB (36.3MB), run=1001-1001msec 00:11:44.220 WRITE: bw=39.5MiB/s (41.4MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.6MiB (41.5MB), run=1001-1001msec 00:11:44.220 00:11:44.220 Disk stats (read/write): 00:11:44.220 nvme0n1: ios=2610/2679, merge=0/0, ticks=469/358, in_queue=827, util=88.78% 00:11:44.220 nvme0n2: ios=1573/1638, merge=0/0, ticks=448/346, in_queue=794, util=88.64% 00:11:44.220 nvme0n3: ios=2163/2560, merge=0/0, ticks=417/369, in_queue=786, util=89.10% 00:11:44.220 nvme0n4: ios=1536/1637, merge=0/0, ticks=408/381, in_queue=789, util=89.75% 00:11:44.220 15:02:14 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:44.220 [global] 00:11:44.220 thread=1 00:11:44.220 invalidate=1 00:11:44.220 rw=randwrite 00:11:44.220 time_based=1 00:11:44.220 runtime=1 
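The job files shown above are what scripts/fio-wrapper emits for these passes: a [global] section carrying the CRC32C verify options plus one [jobN] stanza per namespace device, with only the rw= line changing between the write and randwrite runs. A roughly equivalent standalone invocation for a single device would be the following sketch (same options as the job file, device name assumed to be the first namespace):

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512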
00:11:44.220 ioengine=libaio 00:11:44.220 direct=1 00:11:44.220 bs=4096 00:11:44.220 iodepth=1 00:11:44.220 norandommap=0 00:11:44.220 numjobs=1 00:11:44.220 00:11:44.220 verify_dump=1 00:11:44.220 verify_backlog=512 00:11:44.220 verify_state_save=0 00:11:44.220 do_verify=1 00:11:44.220 verify=crc32c-intel 00:11:44.220 [job0] 00:11:44.220 filename=/dev/nvme0n1 00:11:44.220 [job1] 00:11:44.220 filename=/dev/nvme0n2 00:11:44.220 [job2] 00:11:44.220 filename=/dev/nvme0n3 00:11:44.220 [job3] 00:11:44.220 filename=/dev/nvme0n4 00:11:44.220 Could not set queue depth (nvme0n1) 00:11:44.220 Could not set queue depth (nvme0n2) 00:11:44.220 Could not set queue depth (nvme0n3) 00:11:44.220 Could not set queue depth (nvme0n4) 00:11:44.220 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.220 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.220 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.220 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.220 fio-3.35 00:11:44.220 Starting 4 threads 00:11:45.154 00:11:45.154 job0: (groupid=0, jobs=1): err= 0: pid=75570: Wed Nov 20 15:02:15 2024 00:11:45.154 read: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.8MiB/1001msec) 00:11:45.154 slat (usec): min=11, max=114, avg=18.45, stdev= 6.87 00:11:45.154 clat (usec): min=134, max=1671, avg=170.63, stdev=38.42 00:11:45.154 lat (usec): min=146, max=1691, avg=189.08, stdev=39.77 00:11:45.154 clat percentiles (usec): 00:11:45.154 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:11:45.154 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:11:45.154 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:11:45.154 | 99.00th=[ 239], 99.50th=[ 258], 99.90th=[ 676], 99.95th=[ 840], 00:11:45.154 | 99.99th=[ 1680] 00:11:45.155 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:45.155 slat (usec): min=13, max=104, avg=26.28, stdev= 8.12 00:11:45.155 clat (usec): min=91, max=246, avg=124.31, stdev=13.08 00:11:45.155 lat (usec): min=109, max=350, avg=150.59, stdev=16.37 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 114], 00:11:45.155 | 30.00th=[ 118], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 127], 00:11:45.155 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:11:45.155 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 180], 00:11:45.155 | 99.99th=[ 247] 00:11:45.155 bw ( KiB/s): min=12288, max=12288, per=31.60%, avg=12288.00, stdev= 0.00, samples=1 00:11:45.155 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:45.155 lat (usec) : 100=1.08%, 250=98.62%, 500=0.24%, 750=0.03%, 1000=0.02% 00:11:45.155 lat (msec) : 2=0.02% 00:11:45.155 cpu : usr=2.30%, sys=10.80%, ctx=5849, majf=0, minf=19 00:11:45.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 issued rwts: total=2777,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.155 job1: (groupid=0, jobs=1): err= 0: pid=75571: Wed Nov 20 15:02:15 2024 00:11:45.155 read: 
IOPS=1898, BW=7592KiB/s (7775kB/s)(7600KiB/1001msec) 00:11:45.155 slat (nsec): min=11776, max=72705, avg=22020.90, stdev=7836.23 00:11:45.155 clat (usec): min=167, max=662, avg=273.24, stdev=49.69 00:11:45.155 lat (usec): min=182, max=682, avg=295.26, stdev=50.37 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 186], 5.00th=[ 229], 10.00th=[ 237], 20.00th=[ 245], 00:11:45.155 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:11:45.155 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 318], 95.00th=[ 355], 00:11:45.155 | 99.00th=[ 494], 99.50th=[ 519], 99.90th=[ 644], 99.95th=[ 660], 00:11:45.155 | 99.99th=[ 660] 00:11:45.155 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:45.155 slat (usec): min=17, max=344, avg=28.07, stdev=12.13 00:11:45.155 clat (usec): min=93, max=3891, avg=181.73, stdev=94.91 00:11:45.155 lat (usec): min=112, max=3923, avg=209.79, stdev=96.14 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 104], 5.00th=[ 114], 10.00th=[ 121], 20.00th=[ 139], 00:11:45.155 | 30.00th=[ 151], 40.00th=[ 169], 50.00th=[ 184], 60.00th=[ 194], 00:11:45.155 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 245], 00:11:45.155 | 99.00th=[ 293], 99.50th=[ 334], 99.90th=[ 660], 99.95th=[ 1012], 00:11:45.155 | 99.99th=[ 3884] 00:11:45.155 bw ( KiB/s): min= 8192, max= 8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:11:45.155 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:45.155 lat (usec) : 100=0.25%, 250=62.77%, 500=36.47%, 750=0.46% 00:11:45.155 lat (msec) : 2=0.03%, 4=0.03% 00:11:45.155 cpu : usr=1.70%, sys=8.20%, ctx=3951, majf=0, minf=11 00:11:45.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 issued rwts: total=1900,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.155 job2: (groupid=0, jobs=1): err= 0: pid=75572: Wed Nov 20 15:02:15 2024 00:11:45.155 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:11:45.155 slat (nsec): min=11963, max=56448, avg=24681.78, stdev=4021.83 00:11:45.155 clat (usec): min=159, max=969, avg=313.85, stdev=70.76 00:11:45.155 lat (usec): min=184, max=989, avg=338.53, stdev=71.92 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 190], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:11:45.155 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 302], 00:11:45.155 | 70.00th=[ 347], 80.00th=[ 371], 90.00th=[ 408], 95.00th=[ 465], 00:11:45.155 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 701], 99.95th=[ 971], 00:11:45.155 | 99.99th=[ 971] 00:11:45.155 write: IOPS=1727, BW=6909KiB/s (7075kB/s)(6916KiB/1001msec); 0 zone resets 00:11:45.155 slat (nsec): min=16303, max=99185, avg=34880.24, stdev=8628.69 00:11:45.155 clat (usec): min=109, max=638, avg=237.33, stdev=56.07 00:11:45.155 lat (usec): min=135, max=671, avg=272.21, stdev=60.26 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 124], 5.00th=[ 145], 10.00th=[ 180], 20.00th=[ 206], 00:11:45.155 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 237], 00:11:45.155 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 322], 95.00th=[ 343], 00:11:45.155 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[ 594], 99.95th=[ 635], 00:11:45.155 | 99.99th=[ 635] 00:11:45.155 bw ( KiB/s): min= 8192, max= 
8192, per=21.07%, avg=8192.00, stdev= 0.00, samples=1 00:11:45.155 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:45.155 lat (usec) : 250=43.19%, 500=55.99%, 750=0.80%, 1000=0.03% 00:11:45.155 cpu : usr=1.40%, sys=8.60%, ctx=3265, majf=0, minf=11 00:11:45.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 issued rwts: total=1536,1729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.155 job3: (groupid=0, jobs=1): err= 0: pid=75573: Wed Nov 20 15:02:15 2024 00:11:45.155 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:11:45.155 slat (nsec): min=11138, max=53772, avg=15646.64, stdev=5136.04 00:11:45.155 clat (usec): min=146, max=284, avg=184.02, stdev=15.77 00:11:45.155 lat (usec): min=158, max=298, avg=199.66, stdev=17.41 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:11:45.155 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:11:45.155 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:11:45.155 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 265], 99.95th=[ 273], 00:11:45.155 | 99.99th=[ 285] 00:11:45.155 write: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:11:45.155 slat (nsec): min=14574, max=86632, avg=24391.94, stdev=7790.94 00:11:45.155 clat (usec): min=105, max=2988, avg=141.84, stdev=56.81 00:11:45.155 lat (usec): min=123, max=3013, avg=166.23, stdev=57.46 00:11:45.155 clat percentiles (usec): 00:11:45.155 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 129], 00:11:45.155 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 143], 00:11:45.155 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:11:45.155 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 233], 99.95th=[ 922], 00:11:45.155 | 99.99th=[ 2999] 00:11:45.155 bw ( KiB/s): min=12288, max=12288, per=31.60%, avg=12288.00, stdev= 0.00, samples=1 00:11:45.155 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:45.155 lat (usec) : 250=99.83%, 500=0.13%, 1000=0.02% 00:11:45.155 lat (msec) : 4=0.02% 00:11:45.155 cpu : usr=3.10%, sys=7.90%, ctx=5441, majf=0, minf=5 00:11:45.155 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.155 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.155 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.155 00:11:45.155 Run status group 0 (all jobs): 00:11:45.155 READ: bw=34.2MiB/s (35.9MB/s), 6138KiB/s-10.8MiB/s (6285kB/s-11.4MB/s), io=34.3MiB (35.9MB), run=1001-1001msec 00:11:45.155 WRITE: bw=38.0MiB/s (39.8MB/s), 6909KiB/s-12.0MiB/s (7075kB/s-12.6MB/s), io=38.0MiB (39.9MB), run=1001-1001msec 00:11:45.155 00:11:45.155 Disk stats (read/write): 00:11:45.155 nvme0n1: ios=2463/2560, merge=0/0, ticks=467/352, in_queue=819, util=88.68% 00:11:45.155 nvme0n2: ios=1585/1889, merge=0/0, ticks=465/360, in_queue=825, util=88.89% 00:11:45.414 nvme0n3: ios=1289/1536, merge=0/0, ticks=419/388, in_queue=807, util=88.63% 00:11:45.414 nvme0n4: ios=2121/2560, merge=0/0, ticks=392/394, in_queue=786, 
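The bandwidth figures fio reports can be sanity-checked from the issued I/O counts. Taking job0 of the randwrite pass (pid=75570: 2777 reads of 4096 B completed in 1001 ms), a quick check reproduces the reported 10.8 MiB/s and 2774 IOPS:

  awk 'BEGIN { bytes = 2777 * 4096; print bytes/1.001/1048576 " MiB/s, " 2777/1.001 " IOPS" }'
  # prints roughly 10.8 MiB/s and 2774 IOPS, matching the job summary above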
util=89.82% 00:11:45.414 15:02:15 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:45.414 [global] 00:11:45.414 thread=1 00:11:45.414 invalidate=1 00:11:45.414 rw=write 00:11:45.414 time_based=1 00:11:45.414 runtime=1 00:11:45.414 ioengine=libaio 00:11:45.414 direct=1 00:11:45.414 bs=4096 00:11:45.414 iodepth=128 00:11:45.414 norandommap=0 00:11:45.414 numjobs=1 00:11:45.414 00:11:45.414 verify_dump=1 00:11:45.414 verify_backlog=512 00:11:45.414 verify_state_save=0 00:11:45.414 do_verify=1 00:11:45.414 verify=crc32c-intel 00:11:45.414 [job0] 00:11:45.414 filename=/dev/nvme0n1 00:11:45.414 [job1] 00:11:45.414 filename=/dev/nvme0n2 00:11:45.414 [job2] 00:11:45.414 filename=/dev/nvme0n3 00:11:45.414 [job3] 00:11:45.414 filename=/dev/nvme0n4 00:11:45.414 Could not set queue depth (nvme0n1) 00:11:45.414 Could not set queue depth (nvme0n2) 00:11:45.414 Could not set queue depth (nvme0n3) 00:11:45.414 Could not set queue depth (nvme0n4) 00:11:45.414 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.414 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.414 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.414 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:45.414 fio-3.35 00:11:45.414 Starting 4 threads 00:11:46.789 00:11:46.789 job0: (groupid=0, jobs=1): err= 0: pid=75626: Wed Nov 20 15:02:17 2024 00:11:46.789 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:46.789 slat (usec): min=6, max=3407, avg=79.68, stdev=368.08 00:11:46.789 clat (usec): min=7879, max=12219, avg=10744.55, stdev=550.94 00:11:46.789 lat (usec): min=8708, max=13109, avg=10824.23, stdev=424.91 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[ 8586], 5.00th=[10159], 10.00th=[10290], 20.00th=[10421], 00:11:46.789 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10683], 60.00th=[10814], 00:11:46.789 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:11:46.789 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:11:46.789 | 99.99th=[12256] 00:11:46.789 write: IOPS=6074, BW=23.7MiB/s (24.9MB/s)(23.8MiB/1001msec); 0 zone resets 00:11:46.789 slat (usec): min=9, max=2928, avg=83.13, stdev=336.29 00:11:46.789 clat (usec): min=142, max=13827, avg=10855.13, stdev=1177.67 00:11:46.789 lat (usec): min=1700, max=13848, avg=10938.26, stdev=1138.09 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[ 5407], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10421], 00:11:46.789 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10814], 00:11:46.789 | 70.00th=[11076], 80.00th=[11338], 90.00th=[12125], 95.00th=[12911], 00:11:46.789 | 99.00th=[13566], 99.50th=[13698], 99.90th=[13829], 99.95th=[13829], 00:11:46.789 | 99.99th=[13829] 00:11:46.789 bw ( KiB/s): min=23048, max=24625, per=35.33%, avg=23836.50, stdev=1115.11, samples=2 00:11:46.789 iops : min= 5762, max= 6156, avg=5959.00, stdev=278.60, samples=2 00:11:46.789 lat (usec) : 250=0.01% 00:11:46.789 lat (msec) : 2=0.09%, 4=0.19%, 10=4.87%, 20=94.84% 00:11:46.789 cpu : usr=4.80%, sys=16.90%, ctx=379, majf=0, minf=8 00:11:46.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:46.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:46.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.789 issued rwts: total=5632,6081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.789 job1: (groupid=0, jobs=1): err= 0: pid=75627: Wed Nov 20 15:02:17 2024 00:11:46.789 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(10.0MiB/1017msec) 00:11:46.789 slat (usec): min=4, max=12334, avg=176.77, stdev=802.88 00:11:46.789 clat (usec): min=11888, max=33988, avg=22498.32, stdev=2919.82 00:11:46.789 lat (usec): min=11996, max=36711, avg=22675.09, stdev=2988.54 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[13829], 5.00th=[17957], 10.00th=[19792], 20.00th=[20317], 00:11:46.789 | 30.00th=[21365], 40.00th=[22152], 50.00th=[22676], 60.00th=[22938], 00:11:46.789 | 70.00th=[23200], 80.00th=[23725], 90.00th=[25560], 95.00th=[27132], 00:11:46.789 | 99.00th=[32113], 99.50th=[32637], 99.90th=[33162], 99.95th=[33424], 00:11:46.789 | 99.99th=[33817] 00:11:46.789 write: IOPS=2955, BW=11.5MiB/s (12.1MB/s)(11.7MiB/1017msec); 0 zone resets 00:11:46.789 slat (usec): min=4, max=8842, avg=175.00, stdev=793.50 00:11:46.789 clat (usec): min=7743, max=44103, avg=23434.48, stdev=3984.06 00:11:46.789 lat (usec): min=7765, max=44208, avg=23609.48, stdev=3986.48 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[14091], 5.00th=[17171], 10.00th=[19530], 20.00th=[21627], 00:11:46.789 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:11:46.789 | 70.00th=[24249], 80.00th=[25560], 90.00th=[26608], 95.00th=[29230], 00:11:46.789 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42730], 99.95th=[43254], 00:11:46.789 | 99.99th=[44303] 00:11:46.789 bw ( KiB/s): min=10744, max=12288, per=17.07%, avg=11516.00, stdev=1091.77, samples=2 00:11:46.789 iops : min= 2686, max= 3072, avg=2879.00, stdev=272.94, samples=2 00:11:46.789 lat (msec) : 10=0.11%, 20=10.76%, 50=89.13% 00:11:46.789 cpu : usr=2.36%, sys=7.87%, ctx=636, majf=0, minf=15 00:11:46.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:46.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.789 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.789 job2: (groupid=0, jobs=1): err= 0: pid=75629: Wed Nov 20 15:02:17 2024 00:11:46.789 read: IOPS=5009, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:11:46.789 slat (usec): min=6, max=3848, avg=93.53, stdev=437.49 00:11:46.789 clat (usec): min=237, max=15651, avg=12337.74, stdev=1195.49 00:11:46.789 lat (usec): min=2813, max=15665, avg=12431.26, stdev=1118.88 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[ 6259], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:11:46.789 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:11:46.789 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:11:46.789 | 99.00th=[14877], 99.50th=[15533], 99.90th=[15664], 99.95th=[15664], 00:11:46.789 | 99.99th=[15664] 00:11:46.789 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:11:46.789 slat (usec): min=9, max=3178, avg=95.48, stdev=394.51 00:11:46.789 clat (usec): min=9062, max=15537, avg=12612.26, stdev=840.05 00:11:46.789 lat (usec): min=10682, max=15566, avg=12707.73, stdev=751.85 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 
1.00th=[10159], 5.00th=[11469], 10.00th=[11731], 20.00th=[11994], 00:11:46.789 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:11:46.789 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13698], 95.00th=[14222], 00:11:46.789 | 99.00th=[15139], 99.50th=[15401], 99.90th=[15533], 99.95th=[15533], 00:11:46.789 | 99.99th=[15533] 00:11:46.789 bw ( KiB/s): min=20480, max=20521, per=30.38%, avg=20500.50, stdev=28.99, samples=2 00:11:46.789 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:46.789 lat (usec) : 250=0.01% 00:11:46.789 lat (msec) : 4=0.32%, 10=1.79%, 20=97.88% 00:11:46.789 cpu : usr=4.69%, sys=15.17%, ctx=325, majf=0, minf=7 00:11:46.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:46.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.789 issued rwts: total=5025,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.789 job3: (groupid=0, jobs=1): err= 0: pid=75630: Wed Nov 20 15:02:17 2024 00:11:46.789 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(10.0MiB/1019msec) 00:11:46.789 slat (usec): min=4, max=10107, avg=178.64, stdev=816.44 00:11:46.789 clat (usec): min=11970, max=34308, avg=22657.44, stdev=2885.42 00:11:46.789 lat (usec): min=12639, max=35057, avg=22836.08, stdev=2941.35 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[14091], 5.00th=[16450], 10.00th=[19268], 20.00th=[21365], 00:11:46.789 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22938], 60.00th=[23200], 00:11:46.789 | 70.00th=[23462], 80.00th=[23987], 90.00th=[25822], 95.00th=[26870], 00:11:46.789 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32900], 99.95th=[33817], 00:11:46.789 | 99.99th=[34341] 00:11:46.789 write: IOPS=2925, BW=11.4MiB/s (12.0MB/s)(11.6MiB/1019msec); 0 zone resets 00:11:46.789 slat (usec): min=4, max=9454, avg=173.87, stdev=782.10 00:11:46.789 clat (usec): min=8695, max=46393, avg=23658.12, stdev=3863.99 00:11:46.789 lat (usec): min=8711, max=46402, avg=23831.99, stdev=3860.86 00:11:46.789 clat percentiles (usec): 00:11:46.789 | 1.00th=[13829], 5.00th=[18744], 10.00th=[20841], 20.00th=[21890], 00:11:46.789 | 30.00th=[22414], 40.00th=[22676], 50.00th=[22938], 60.00th=[23462], 00:11:46.789 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26346], 95.00th=[30540], 00:11:46.789 | 99.00th=[43254], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:11:46.789 | 99.99th=[46400] 00:11:46.789 bw ( KiB/s): min=10544, max=12288, per=16.92%, avg=11416.00, stdev=1233.19, samples=2 00:11:46.789 iops : min= 2636, max= 3072, avg=2854.00, stdev=308.30, samples=2 00:11:46.789 lat (msec) : 10=0.13%, 20=10.11%, 50=89.77% 00:11:46.789 cpu : usr=2.55%, sys=7.86%, ctx=648, majf=0, minf=11 00:11:46.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:46.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:46.789 issued rwts: total=2560,2981,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:46.789 00:11:46.789 Run status group 0 (all jobs): 00:11:46.789 READ: bw=60.5MiB/s (63.4MB/s), 9.81MiB/s-22.0MiB/s (10.3MB/s-23.0MB/s), io=61.6MiB (64.6MB), run=1001-1019msec 00:11:46.789 WRITE: bw=65.9MiB/s (69.1MB/s), 11.4MiB/s-23.7MiB/s (12.0MB/s-24.9MB/s), io=67.1MiB 
(70.4MB), run=1001-1019msec 00:11:46.789 00:11:46.789 Disk stats (read/write): 00:11:46.789 nvme0n1: ios=5010/5120, merge=0/0, ticks=11651/11767, in_queue=23418, util=88.68% 00:11:46.789 nvme0n2: ios=2230/2560, merge=0/0, ticks=23815/26651, in_queue=50466, util=88.28% 00:11:46.789 nvme0n3: ios=4160/4608, merge=0/0, ticks=11381/12300, in_queue=23681, util=89.20% 00:11:46.789 nvme0n4: ios=2171/2560, merge=0/0, ticks=23867/27158, in_queue=51025, util=89.13% 00:11:46.789 15:02:17 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:46.789 [global] 00:11:46.789 thread=1 00:11:46.789 invalidate=1 00:11:46.789 rw=randwrite 00:11:46.789 time_based=1 00:11:46.790 runtime=1 00:11:46.790 ioengine=libaio 00:11:46.790 direct=1 00:11:46.790 bs=4096 00:11:46.790 iodepth=128 00:11:46.790 norandommap=0 00:11:46.790 numjobs=1 00:11:46.790 00:11:46.790 verify_dump=1 00:11:46.790 verify_backlog=512 00:11:46.790 verify_state_save=0 00:11:46.790 do_verify=1 00:11:46.790 verify=crc32c-intel 00:11:46.790 [job0] 00:11:46.790 filename=/dev/nvme0n1 00:11:46.790 [job1] 00:11:46.790 filename=/dev/nvme0n2 00:11:46.790 [job2] 00:11:46.790 filename=/dev/nvme0n3 00:11:46.790 [job3] 00:11:46.790 filename=/dev/nvme0n4 00:11:46.790 Could not set queue depth (nvme0n1) 00:11:46.790 Could not set queue depth (nvme0n2) 00:11:46.790 Could not set queue depth (nvme0n3) 00:11:46.790 Could not set queue depth (nvme0n4) 00:11:46.790 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.790 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.790 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.790 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:46.790 fio-3.35 00:11:46.790 Starting 4 threads 00:11:48.166 00:11:48.166 job0: (groupid=0, jobs=1): err= 0: pid=75685: Wed Nov 20 15:02:18 2024 00:11:48.166 read: IOPS=5101, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:11:48.166 slat (usec): min=5, max=29176, avg=98.53, stdev=745.26 00:11:48.166 clat (usec): min=1364, max=59196, avg=13394.79, stdev=6491.48 00:11:48.166 lat (usec): min=4445, max=59234, avg=13493.32, stdev=6536.04 00:11:48.166 clat percentiles (usec): 00:11:48.166 | 1.00th=[ 5800], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10683], 00:11:48.166 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:11:48.166 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15795], 95.00th=[30278], 00:11:48.166 | 99.00th=[46924], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:11:48.166 | 99.99th=[58983] 00:11:48.166 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:48.166 slat (usec): min=4, max=10348, avg=89.19, stdev=524.83 00:11:48.166 clat (usec): min=2945, max=40856, avg=11441.32, stdev=2107.26 00:11:48.166 lat (usec): min=2967, max=40868, avg=11530.51, stdev=2066.37 00:11:48.166 clat percentiles (usec): 00:11:48.166 | 1.00th=[ 4424], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10290], 00:11:48.166 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11863], 00:11:48.166 | 70.00th=[12256], 80.00th=[13173], 90.00th=[13698], 95.00th=[14222], 00:11:48.166 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19006], 00:11:48.166 | 99.99th=[40633] 00:11:48.166 bw ( KiB/s): min=19464, max=21539, 
per=31.01%, avg=20501.50, stdev=1467.25, samples=2 00:11:48.166 iops : min= 4866, max= 5384, avg=5125.00, stdev=366.28, samples=2 00:11:48.166 lat (msec) : 2=0.01%, 4=0.27%, 10=10.20%, 20=85.75%, 50=3.66% 00:11:48.166 lat (msec) : 100=0.10% 00:11:48.166 cpu : usr=4.90%, sys=13.19%, ctx=268, majf=0, minf=11 00:11:48.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:48.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.166 issued rwts: total=5112,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.166 job1: (groupid=0, jobs=1): err= 0: pid=75686: Wed Nov 20 15:02:18 2024 00:11:48.166 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:11:48.166 slat (usec): min=3, max=8471, avg=171.60, stdev=771.23 00:11:48.166 clat (usec): min=7290, max=39053, avg=22116.61, stdev=6270.65 00:11:48.166 lat (usec): min=7303, max=39081, avg=22288.22, stdev=6312.43 00:11:48.166 clat percentiles (usec): 00:11:48.166 | 1.00th=[ 7373], 5.00th=[11994], 10.00th=[12780], 20.00th=[17695], 00:11:48.166 | 30.00th=[20317], 40.00th=[21365], 50.00th=[22414], 60.00th=[23200], 00:11:48.166 | 70.00th=[24511], 80.00th=[26608], 90.00th=[30016], 95.00th=[33162], 00:11:48.166 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:11:48.166 | 99.99th=[39060] 00:11:48.166 write: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1003msec); 0 zone resets 00:11:48.166 slat (usec): min=4, max=13969, avg=141.00, stdev=768.68 00:11:48.166 clat (usec): min=2720, max=35256, avg=18591.74, stdev=5669.45 00:11:48.166 lat (usec): min=2739, max=35287, avg=18732.74, stdev=5705.90 00:11:48.166 clat percentiles (usec): 00:11:48.166 | 1.00th=[ 7046], 5.00th=[11600], 10.00th=[11863], 20.00th=[12780], 00:11:48.166 | 30.00th=[14353], 40.00th=[16450], 50.00th=[17695], 60.00th=[20579], 00:11:48.166 | 70.00th=[23200], 80.00th=[24511], 90.00th=[25297], 95.00th=[26870], 00:11:48.166 | 99.00th=[30278], 99.50th=[30540], 99.90th=[34341], 99.95th=[34341], 00:11:48.166 | 99.99th=[35390] 00:11:48.166 bw ( KiB/s): min=10424, max=14208, per=18.63%, avg=12316.00, stdev=2675.69, samples=2 00:11:48.166 iops : min= 2606, max= 3552, avg=3079.00, stdev=668.92, samples=2 00:11:48.166 lat (msec) : 4=0.32%, 10=3.68%, 20=40.13%, 50=55.87% 00:11:48.166 cpu : usr=2.00%, sys=8.58%, ctx=656, majf=0, minf=12 00:11:48.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:48.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.166 issued rwts: total=3072,3185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.166 job2: (groupid=0, jobs=1): err= 0: pid=75687: Wed Nov 20 15:02:18 2024 00:11:48.166 read: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1002msec) 00:11:48.166 slat (usec): min=3, max=8145, avg=174.42, stdev=773.25 00:11:48.166 clat (usec): min=1795, max=37366, avg=22131.81, stdev=5484.03 00:11:48.167 lat (usec): min=3645, max=37386, avg=22306.22, stdev=5515.79 00:11:48.167 clat percentiles (usec): 00:11:48.167 | 1.00th=[ 8848], 5.00th=[13435], 10.00th=[14353], 20.00th=[16909], 00:11:48.167 | 30.00th=[20317], 40.00th=[21365], 50.00th=[22676], 60.00th=[23462], 00:11:48.167 | 70.00th=[24249], 80.00th=[26346], 90.00th=[29492], 
95.00th=[31065], 00:11:48.167 | 99.00th=[33817], 99.50th=[34866], 99.90th=[37487], 99.95th=[37487], 00:11:48.167 | 99.99th=[37487] 00:11:48.167 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:11:48.167 slat (usec): min=4, max=12620, avg=153.78, stdev=717.47 00:11:48.167 clat (usec): min=7042, max=31844, avg=20497.62, stdev=4625.84 00:11:48.167 lat (usec): min=10033, max=31885, avg=20651.40, stdev=4637.39 00:11:48.167 clat percentiles (usec): 00:11:48.167 | 1.00th=[12911], 5.00th=[13435], 10.00th=[14222], 20.00th=[14877], 00:11:48.167 | 30.00th=[17171], 40.00th=[19530], 50.00th=[20841], 60.00th=[22414], 00:11:48.167 | 70.00th=[23725], 80.00th=[24773], 90.00th=[25560], 95.00th=[27919], 00:11:48.167 | 99.00th=[30540], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589], 00:11:48.167 | 99.99th=[31851] 00:11:48.167 bw ( KiB/s): min=11504, max=13045, per=18.56%, avg=12274.50, stdev=1089.65, samples=2 00:11:48.167 iops : min= 2876, max= 3261, avg=3068.50, stdev=272.24, samples=2 00:11:48.167 lat (msec) : 2=0.02%, 4=0.22%, 10=0.74%, 20=34.73%, 50=64.30% 00:11:48.167 cpu : usr=2.59%, sys=8.28%, ctx=638, majf=0, minf=13 00:11:48.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:48.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.167 issued rwts: total=2892,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.167 job3: (groupid=0, jobs=1): err= 0: pid=75688: Wed Nov 20 15:02:18 2024 00:11:48.167 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:11:48.167 slat (usec): min=6, max=10521, avg=92.56, stdev=538.05 00:11:48.167 clat (usec): min=7761, max=28013, avg=12657.85, stdev=1463.18 00:11:48.167 lat (usec): min=7773, max=28025, avg=12750.41, stdev=1506.34 00:11:48.167 clat percentiles (usec): 00:11:48.167 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11338], 20.00th=[11863], 00:11:48.167 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:11:48.167 | 70.00th=[12911], 80.00th=[13304], 90.00th=[14222], 95.00th=[14746], 00:11:48.167 | 99.00th=[18220], 99.50th=[18482], 99.90th=[22676], 99.95th=[25822], 00:11:48.167 | 99.99th=[27919] 00:11:48.167 write: IOPS=5198, BW=20.3MiB/s (21.3MB/s)(20.4MiB/1004msec); 0 zone resets 00:11:48.167 slat (usec): min=6, max=9931, avg=93.68, stdev=552.27 00:11:48.167 clat (usec): min=3612, max=18589, avg=11956.58, stdev=1663.70 00:11:48.167 lat (usec): min=3629, max=18809, avg=12050.27, stdev=1597.18 00:11:48.167 clat percentiles (usec): 00:11:48.167 | 1.00th=[ 5014], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11469], 00:11:48.167 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:11:48.167 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13304], 95.00th=[14091], 00:11:48.167 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:11:48.167 | 99.99th=[18482] 00:11:48.167 bw ( KiB/s): min=20439, max=20488, per=30.95%, avg=20463.50, stdev=34.65, samples=2 00:11:48.167 iops : min= 5109, max= 5122, avg=5115.50, stdev= 9.19, samples=2 00:11:48.167 lat (msec) : 4=0.16%, 10=5.05%, 20=94.66%, 50=0.13% 00:11:48.167 cpu : usr=4.29%, sys=13.46%, ctx=252, majf=0, minf=13 00:11:48.167 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:48.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:48.167 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:48.167 issued rwts: total=5120,5219,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:48.167 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:48.167 00:11:48.167 Run status group 0 (all jobs): 00:11:48.167 READ: bw=63.0MiB/s (66.1MB/s), 11.3MiB/s-19.9MiB/s (11.8MB/s-20.9MB/s), io=63.3MiB (66.3MB), run=1002-1004msec 00:11:48.167 WRITE: bw=64.6MiB/s (67.7MB/s), 12.0MiB/s-20.3MiB/s (12.6MB/s-21.3MB/s), io=64.8MiB (68.0MB), run=1002-1004msec 00:11:48.167 00:11:48.167 Disk stats (read/write): 00:11:48.167 nvme0n1: ios=4608/4610, merge=0/0, ticks=51257/48990, in_queue=100247, util=88.47% 00:11:48.167 nvme0n2: ios=2587/2769, merge=0/0, ticks=31076/27560, in_queue=58636, util=87.07% 00:11:48.167 nvme0n3: ios=2498/2560, merge=0/0, ticks=30252/29011, in_queue=59263, util=88.73% 00:11:48.167 nvme0n4: ios=4096/4468, merge=0/0, ticks=49952/50135, in_queue=100087, util=89.68% 00:11:48.167 15:02:18 -- target/fio.sh@55 -- # sync 00:11:48.167 15:02:18 -- target/fio.sh@59 -- # fio_pid=75707 00:11:48.167 15:02:18 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:48.167 15:02:18 -- target/fio.sh@61 -- # sleep 3 00:11:48.167 [global] 00:11:48.167 thread=1 00:11:48.167 invalidate=1 00:11:48.167 rw=read 00:11:48.167 time_based=1 00:11:48.167 runtime=10 00:11:48.167 ioengine=libaio 00:11:48.167 direct=1 00:11:48.167 bs=4096 00:11:48.167 iodepth=1 00:11:48.167 norandommap=1 00:11:48.167 numjobs=1 00:11:48.167 00:11:48.167 [job0] 00:11:48.167 filename=/dev/nvme0n1 00:11:48.167 [job1] 00:11:48.167 filename=/dev/nvme0n2 00:11:48.167 [job2] 00:11:48.167 filename=/dev/nvme0n3 00:11:48.167 [job3] 00:11:48.167 filename=/dev/nvme0n4 00:11:48.167 Could not set queue depth (nvme0n1) 00:11:48.167 Could not set queue depth (nvme0n2) 00:11:48.167 Could not set queue depth (nvme0n3) 00:11:48.167 Could not set queue depth (nvme0n4) 00:11:48.167 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.167 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.167 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.167 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:48.167 fio-3.35 00:11:48.167 Starting 4 threads 00:11:51.452 15:02:21 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:51.452 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=52772864, buflen=4096 00:11:51.452 fio: pid=75755, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:51.452 15:02:22 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:51.711 fio: pid=75754, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:51.711 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=70819840, buflen=4096 00:11:51.711 15:02:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.711 15:02:22 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:51.970 fio: pid=75752, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:51.970 fio: io_u error on file /dev/nvme0n1: Operation not supported: read 
offset=66301952, buflen=4096 00:11:51.970 15:02:22 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:51.970 15:02:22 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:52.229 fio: pid=75753, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:52.229 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=17321984, buflen=4096 00:11:52.487 00:11:52.487 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75752: Wed Nov 20 15:02:23 2024 00:11:52.487 read: IOPS=4449, BW=17.4MiB/s (18.2MB/s)(63.2MiB/3638msec) 00:11:52.487 slat (usec): min=8, max=15376, avg=15.80, stdev=170.48 00:11:52.487 clat (usec): min=118, max=3629, avg=207.81, stdev=66.63 00:11:52.487 lat (usec): min=140, max=15574, avg=223.61, stdev=182.99 00:11:52.487 clat percentiles (usec): 00:11:52.487 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:11:52.487 | 30.00th=[ 167], 40.00th=[ 178], 50.00th=[ 221], 60.00th=[ 233], 00:11:52.487 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:11:52.487 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 449], 99.95th=[ 1303], 00:11:52.487 | 99.99th=[ 2835] 00:11:52.487 bw ( KiB/s): min=15456, max=21840, per=25.94%, avg=17752.71, stdev=2912.90, samples=7 00:11:52.487 iops : min= 3864, max= 5460, avg=4438.14, stdev=728.21, samples=7 00:11:52.487 lat (usec) : 250=81.33%, 500=18.57%, 750=0.02%, 1000=0.01% 00:11:52.487 lat (msec) : 2=0.04%, 4=0.02% 00:11:52.487 cpu : usr=1.07%, sys=5.61%, ctx=16196, majf=0, minf=1 00:11:52.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 issued rwts: total=16188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.488 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75753: Wed Nov 20 15:02:23 2024 00:11:52.488 read: IOPS=5266, BW=20.6MiB/s (21.6MB/s)(80.5MiB/3914msec) 00:11:52.488 slat (usec): min=8, max=15185, avg=17.60, stdev=168.61 00:11:52.488 clat (usec): min=105, max=3589, avg=170.84, stdev=42.41 00:11:52.488 lat (usec): min=136, max=15353, avg=188.44, stdev=176.27 00:11:52.488 clat percentiles (usec): 00:11:52.488 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 153], 00:11:52.488 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:11:52.488 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 223], 00:11:52.488 | 99.00th=[ 253], 99.50th=[ 273], 99.90th=[ 400], 99.95th=[ 766], 00:11:52.488 | 99.99th=[ 1795] 00:11:52.488 bw ( KiB/s): min=18949, max=21928, per=30.71%, avg=21019.00, stdev=960.31, samples=7 00:11:52.488 iops : min= 4737, max= 5482, avg=5254.71, stdev=240.17, samples=7 00:11:52.488 lat (usec) : 250=98.86%, 500=1.08%, 750=0.01%, 1000=0.02% 00:11:52.488 lat (msec) : 2=0.02%, 4=0.01% 00:11:52.488 cpu : usr=1.69%, sys=6.77%, ctx=20637, majf=0, minf=2 00:11:52.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 issued rwts: total=20614,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:52.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.488 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75754: Wed Nov 20 15:02:23 2024 00:11:52.488 read: IOPS=5214, BW=20.4MiB/s (21.4MB/s)(67.5MiB/3316msec) 00:11:52.488 slat (usec): min=8, max=13105, avg=14.46, stdev=112.03 00:11:52.488 clat (usec): min=59, max=2524, avg=175.99, stdev=33.73 00:11:52.488 lat (usec): min=149, max=13369, avg=190.44, stdev=117.78 00:11:52.488 clat percentiles (usec): 00:11:52.488 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:11:52.488 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:11:52.488 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 202], 95.00th=[ 225], 00:11:52.488 | 99.00th=[ 251], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 553], 00:11:52.488 | 99.99th=[ 1844] 00:11:52.488 bw ( KiB/s): min=21152, max=21592, per=31.34%, avg=21450.67, stdev=159.49, samples=6 00:11:52.488 iops : min= 5288, max= 5398, avg=5362.67, stdev=39.87, samples=6 00:11:52.488 lat (usec) : 100=0.01%, 250=98.87%, 500=1.06%, 750=0.02%, 1000=0.01% 00:11:52.488 lat (msec) : 2=0.02%, 4=0.01% 00:11:52.488 cpu : usr=1.39%, sys=6.30%, ctx=17297, majf=0, minf=2 00:11:52.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 issued rwts: total=17291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.488 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=75755: Wed Nov 20 15:02:23 2024 00:11:52.488 read: IOPS=4273, BW=16.7MiB/s (17.5MB/s)(50.3MiB/3015msec) 00:11:52.488 slat (usec): min=8, max=105, avg=13.76, stdev= 3.70 00:11:52.488 clat (usec): min=140, max=7456, avg=218.78, stdev=81.43 00:11:52.488 lat (usec): min=151, max=7469, avg=232.54, stdev=81.53 00:11:52.488 clat percentiles (usec): 00:11:52.488 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:11:52.488 | 30.00th=[ 184], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 237], 00:11:52.488 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:11:52.488 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 433], 99.95th=[ 758], 00:11:52.488 | 99.99th=[ 3326] 00:11:52.488 bw ( KiB/s): min=15456, max=21048, per=25.04%, avg=17140.00, stdev=2543.06, samples=6 00:11:52.488 iops : min= 3864, max= 5262, avg=4285.00, stdev=635.76, samples=6 00:11:52.488 lat (usec) : 250=80.98%, 500=18.93%, 750=0.02%, 1000=0.03% 00:11:52.488 lat (msec) : 2=0.02%, 4=0.01%, 10=0.01% 00:11:52.488 cpu : usr=1.23%, sys=5.71%, ctx=12886, majf=0, minf=2 00:11:52.488 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.488 issued rwts: total=12885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.488 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.488 00:11:52.488 Run status group 0 (all jobs): 00:11:52.488 READ: bw=66.8MiB/s (70.1MB/s), 16.7MiB/s-20.6MiB/s (17.5MB/s-21.6MB/s), io=262MiB (274MB), run=3015-3914msec 00:11:52.488 00:11:52.488 Disk stats (read/write): 00:11:52.488 nvme0n1: ios=16072/0, merge=0/0, 
ticks=3246/0, in_queue=3246, util=95.35% 00:11:52.488 nvme0n2: ios=20340/0, merge=0/0, ticks=3530/0, in_queue=3530, util=95.66% 00:11:52.488 nvme0n3: ios=16489/0, merge=0/0, ticks=2879/0, in_queue=2879, util=96.24% 00:11:52.488 nvme0n4: ios=12326/0, merge=0/0, ticks=2664/0, in_queue=2664, util=96.53% 00:11:52.488 15:02:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:52.488 15:02:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:52.748 15:02:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:52.748 15:02:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:53.006 15:02:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.006 15:02:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:53.264 15:02:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.264 15:02:23 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:53.526 15:02:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:53.526 15:02:24 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:53.793 15:02:24 -- target/fio.sh@69 -- # fio_status=0 00:11:53.793 15:02:24 -- target/fio.sh@70 -- # wait 75707 00:11:53.793 15:02:24 -- target/fio.sh@70 -- # fio_status=4 00:11:53.793 15:02:24 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:53.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.793 15:02:24 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:53.793 15:02:24 -- common/autotest_common.sh@1208 -- # local i=0 00:11:53.793 15:02:24 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:11:53.793 15:02:24 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.793 15:02:24 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:11:53.793 15:02:24 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:53.793 nvmf hotplug test: fio failed as expected 00:11:53.793 15:02:24 -- common/autotest_common.sh@1220 -- # return 0 00:11:53.793 15:02:24 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:53.793 15:02:24 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:53.793 15:02:24 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.053 15:02:24 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:54.053 15:02:24 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:54.053 15:02:24 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:54.053 15:02:24 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:54.053 15:02:24 -- target/fio.sh@91 -- # nvmftestfini 00:11:54.053 15:02:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:54.053 15:02:24 -- nvmf/common.sh@116 -- # sync 00:11:54.313 15:02:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:54.313 15:02:24 -- nvmf/common.sh@119 -- # set +e 00:11:54.313 15:02:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:54.313 15:02:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:54.313 rmmod 
nvme_tcp 00:11:54.313 rmmod nvme_fabrics 00:11:54.313 rmmod nvme_keyring 00:11:54.313 15:02:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:54.313 15:02:24 -- nvmf/common.sh@123 -- # set -e 00:11:54.313 15:02:24 -- nvmf/common.sh@124 -- # return 0 00:11:54.313 15:02:24 -- nvmf/common.sh@477 -- # '[' -n 75325 ']' 00:11:54.313 15:02:24 -- nvmf/common.sh@478 -- # killprocess 75325 00:11:54.313 15:02:24 -- common/autotest_common.sh@936 -- # '[' -z 75325 ']' 00:11:54.313 15:02:24 -- common/autotest_common.sh@940 -- # kill -0 75325 00:11:54.313 15:02:24 -- common/autotest_common.sh@941 -- # uname 00:11:54.313 15:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.313 15:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75325 00:11:54.313 killing process with pid 75325 00:11:54.313 15:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.313 15:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.313 15:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75325' 00:11:54.313 15:02:24 -- common/autotest_common.sh@955 -- # kill 75325 00:11:54.313 15:02:24 -- common/autotest_common.sh@960 -- # wait 75325 00:11:54.572 15:02:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:54.572 15:02:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:54.572 15:02:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:54.572 15:02:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.572 15:02:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:54.572 15:02:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.572 15:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.572 15:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.572 15:02:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:54.572 ************************************ 00:11:54.572 END TEST nvmf_fio_target 00:11:54.572 ************************************ 00:11:54.572 00:11:54.572 real 0m19.291s 00:11:54.572 user 1m12.376s 00:11:54.572 sys 0m11.000s 00:11:54.572 15:02:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:54.572 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.572 15:02:25 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:54.572 15:02:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:54.572 15:02:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.572 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:54.572 ************************************ 00:11:54.572 START TEST nvmf_bdevio 00:11:54.572 ************************************ 00:11:54.572 15:02:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:54.572 * Looking for test storage... 
00:11:54.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:54.572 15:02:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:54.572 15:02:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:54.572 15:02:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:54.831 15:02:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:54.831 15:02:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:54.831 15:02:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:54.831 15:02:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:54.831 15:02:25 -- scripts/common.sh@335 -- # IFS=.-: 00:11:54.831 15:02:25 -- scripts/common.sh@335 -- # read -ra ver1 00:11:54.831 15:02:25 -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.831 15:02:25 -- scripts/common.sh@336 -- # read -ra ver2 00:11:54.831 15:02:25 -- scripts/common.sh@337 -- # local 'op=<' 00:11:54.831 15:02:25 -- scripts/common.sh@339 -- # ver1_l=2 00:11:54.831 15:02:25 -- scripts/common.sh@340 -- # ver2_l=1 00:11:54.831 15:02:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:54.831 15:02:25 -- scripts/common.sh@343 -- # case "$op" in 00:11:54.831 15:02:25 -- scripts/common.sh@344 -- # : 1 00:11:54.831 15:02:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:54.831 15:02:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.831 15:02:25 -- scripts/common.sh@364 -- # decimal 1 00:11:54.831 15:02:25 -- scripts/common.sh@352 -- # local d=1 00:11:54.831 15:02:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.831 15:02:25 -- scripts/common.sh@354 -- # echo 1 00:11:54.831 15:02:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:54.831 15:02:25 -- scripts/common.sh@365 -- # decimal 2 00:11:54.831 15:02:25 -- scripts/common.sh@352 -- # local d=2 00:11:54.831 15:02:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.831 15:02:25 -- scripts/common.sh@354 -- # echo 2 00:11:54.831 15:02:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:54.831 15:02:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:54.831 15:02:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:54.831 15:02:25 -- scripts/common.sh@367 -- # return 0 00:11:54.831 15:02:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.831 15:02:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.831 --rc genhtml_branch_coverage=1 00:11:54.831 --rc genhtml_function_coverage=1 00:11:54.831 --rc genhtml_legend=1 00:11:54.831 --rc geninfo_all_blocks=1 00:11:54.831 --rc geninfo_unexecuted_blocks=1 00:11:54.831 00:11:54.831 ' 00:11:54.831 15:02:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.831 --rc genhtml_branch_coverage=1 00:11:54.831 --rc genhtml_function_coverage=1 00:11:54.831 --rc genhtml_legend=1 00:11:54.831 --rc geninfo_all_blocks=1 00:11:54.831 --rc geninfo_unexecuted_blocks=1 00:11:54.831 00:11:54.831 ' 00:11:54.831 15:02:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.831 --rc genhtml_branch_coverage=1 00:11:54.831 --rc genhtml_function_coverage=1 00:11:54.831 --rc genhtml_legend=1 00:11:54.831 --rc geninfo_all_blocks=1 00:11:54.831 --rc geninfo_unexecuted_blocks=1 00:11:54.831 00:11:54.831 ' 00:11:54.831 
15:02:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:54.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.831 --rc genhtml_branch_coverage=1 00:11:54.831 --rc genhtml_function_coverage=1 00:11:54.831 --rc genhtml_legend=1 00:11:54.831 --rc geninfo_all_blocks=1 00:11:54.831 --rc geninfo_unexecuted_blocks=1 00:11:54.831 00:11:54.831 ' 00:11:54.831 15:02:25 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.831 15:02:25 -- nvmf/common.sh@7 -- # uname -s 00:11:54.831 15:02:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.831 15:02:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.831 15:02:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.831 15:02:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.831 15:02:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.831 15:02:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.831 15:02:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.831 15:02:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.831 15:02:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.831 15:02:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:54.831 15:02:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:54.831 15:02:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.831 15:02:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.831 15:02:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:54.831 15:02:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.831 15:02:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.831 15:02:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.831 15:02:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.831 15:02:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.831 15:02:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.831 15:02:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.831 15:02:25 -- paths/export.sh@5 -- # export PATH 00:11:54.831 15:02:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.831 15:02:25 -- nvmf/common.sh@46 -- # : 0 00:11:54.831 15:02:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:54.831 15:02:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:54.831 15:02:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:54.831 15:02:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.831 15:02:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.831 15:02:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:54.831 15:02:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:54.831 15:02:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:54.831 15:02:25 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:54.831 15:02:25 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:54.831 15:02:25 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:54.831 15:02:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:54.831 15:02:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.831 15:02:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:54.831 15:02:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:54.831 15:02:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:54.831 15:02:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.831 15:02:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:54.831 15:02:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.831 15:02:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:54.831 15:02:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:54.832 15:02:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.832 15:02:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.832 15:02:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:54.832 15:02:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:54.832 15:02:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:54.832 15:02:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:54.832 15:02:25 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:54.832 15:02:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.832 15:02:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:54.832 15:02:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:54.832 15:02:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:54.832 15:02:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:54.832 15:02:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:54.832 15:02:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:54.832 Cannot find device "nvmf_tgt_br" 00:11:54.832 15:02:25 -- nvmf/common.sh@154 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:54.832 Cannot find device "nvmf_tgt_br2" 00:11:54.832 15:02:25 -- nvmf/common.sh@155 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:54.832 15:02:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:54.832 Cannot find device "nvmf_tgt_br" 00:11:54.832 15:02:25 -- nvmf/common.sh@157 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:54.832 Cannot find device "nvmf_tgt_br2" 00:11:54.832 15:02:25 -- nvmf/common.sh@158 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:54.832 15:02:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:54.832 15:02:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:54.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.832 15:02:25 -- nvmf/common.sh@161 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:54.832 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:54.832 15:02:25 -- nvmf/common.sh@162 -- # true 00:11:54.832 15:02:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:54.832 15:02:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:54.832 15:02:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:54.832 15:02:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:54.832 15:02:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:54.832 15:02:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:55.090 15:02:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:55.090 15:02:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:55.090 15:02:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:55.090 15:02:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:55.090 15:02:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:55.090 15:02:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:55.090 15:02:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:55.090 15:02:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:55.090 15:02:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:55.090 15:02:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:55.090 15:02:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:55.090 15:02:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:55.090 15:02:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:55.090 15:02:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:55.090 15:02:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:55.090 15:02:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:55.090 15:02:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:55.090 15:02:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:55.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:55.090 00:11:55.090 --- 10.0.0.2 ping statistics --- 00:11:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.090 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:55.090 15:02:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:55.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:55.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:11:55.090 00:11:55.090 --- 10.0.0.3 ping statistics --- 00:11:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.090 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:11:55.090 15:02:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:55.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:55.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:55.090 00:11:55.090 --- 10.0.0.1 ping statistics --- 00:11:55.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.090 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:55.090 15:02:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.090 15:02:25 -- nvmf/common.sh@421 -- # return 0 00:11:55.090 15:02:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:55.090 15:02:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.090 15:02:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:55.090 15:02:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:55.090 15:02:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.090 15:02:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:55.090 15:02:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:55.091 15:02:25 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:55.091 15:02:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:55.091 15:02:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:55.091 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:55.091 15:02:25 -- nvmf/common.sh@469 -- # nvmfpid=76027 00:11:55.091 15:02:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:55.091 15:02:25 -- nvmf/common.sh@470 -- # waitforlisten 76027 00:11:55.091 15:02:25 -- common/autotest_common.sh@829 -- # '[' -z 76027 ']' 00:11:55.091 15:02:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.091 15:02:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
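The nvmf_veth_init sequence traced above boils down to a small fixed topology; the sketch below only condenses the commands already logged (names, addresses and port are exactly the ones used by nvmf/common.sh, and the matching 'ip link set ... up' calls are omitted for brevity):

  # target side lives in its own network namespace, reached over a veth/bridge fabric
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (host side)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # bridge joins the three host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow in the trace simply confirm this fabric before the target is started.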
00:11:55.091 15:02:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.091 15:02:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.091 15:02:25 -- common/autotest_common.sh@10 -- # set +x 00:11:55.091 [2024-11-20 15:02:25.845066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:55.091 [2024-11-20 15:02:25.845172] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.350 [2024-11-20 15:02:25.986446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.350 [2024-11-20 15:02:26.025696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.350 [2024-11-20 15:02:26.025870] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.350 [2024-11-20 15:02:26.025885] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.350 [2024-11-20 15:02:26.025896] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.350 [2024-11-20 15:02:26.026039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:55.350 [2024-11-20 15:02:26.026184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.350 [2024-11-20 15:02:26.026089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:55.350 [2024-11-20 15:02:26.026177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:56.285 15:02:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.285 15:02:26 -- common/autotest_common.sh@862 -- # return 0 00:11:56.285 15:02:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:56.285 15:02:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:56.285 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 15:02:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.285 15:02:26 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:56.285 15:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.285 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 [2024-11-20 15:02:26.921313] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.285 15:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.285 15:02:26 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:56.285 15:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.285 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 Malloc0 00:11:56.285 15:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.285 15:02:26 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:56.285 15:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.285 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 15:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.285 15:02:26 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:56.285 15:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.285 
15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 15:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.285 15:02:26 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:56.285 15:02:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.285 15:02:26 -- common/autotest_common.sh@10 -- # set +x 00:11:56.285 [2024-11-20 15:02:26.985671] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:56.285 15:02:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.285 15:02:26 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:56.285 15:02:26 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:56.285 15:02:26 -- nvmf/common.sh@520 -- # config=() 00:11:56.285 15:02:26 -- nvmf/common.sh@520 -- # local subsystem config 00:11:56.285 15:02:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:56.285 15:02:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:56.285 { 00:11:56.285 "params": { 00:11:56.285 "name": "Nvme$subsystem", 00:11:56.285 "trtype": "$TEST_TRANSPORT", 00:11:56.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:56.285 "adrfam": "ipv4", 00:11:56.285 "trsvcid": "$NVMF_PORT", 00:11:56.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:56.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:56.285 "hdgst": ${hdgst:-false}, 00:11:56.285 "ddgst": ${ddgst:-false} 00:11:56.285 }, 00:11:56.285 "method": "bdev_nvme_attach_controller" 00:11:56.285 } 00:11:56.285 EOF 00:11:56.285 )") 00:11:56.285 15:02:26 -- nvmf/common.sh@542 -- # cat 00:11:56.285 15:02:26 -- nvmf/common.sh@544 -- # jq . 00:11:56.285 15:02:27 -- nvmf/common.sh@545 -- # IFS=, 00:11:56.285 15:02:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:56.285 "params": { 00:11:56.285 "name": "Nvme1", 00:11:56.285 "trtype": "tcp", 00:11:56.285 "traddr": "10.0.0.2", 00:11:56.285 "adrfam": "ipv4", 00:11:56.285 "trsvcid": "4420", 00:11:56.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:56.285 "hdgst": false, 00:11:56.285 "ddgst": false 00:11:56.285 }, 00:11:56.285 "method": "bdev_nvme_attach_controller" 00:11:56.285 }' 00:11:56.285 [2024-11-20 15:02:27.030662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:56.285 [2024-11-20 15:02:27.031186] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76069 ] 00:11:56.543 [2024-11-20 15:02:27.196753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:56.543 [2024-11-20 15:02:27.247594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.543 [2024-11-20 15:02:27.247725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.543 [2024-11-20 15:02:27.247735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.803 [2024-11-20 15:02:27.396305] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:11:56.803 [2024-11-20 15:02:27.396360] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:56.803 I/O targets: 00:11:56.803 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:56.803 00:11:56.803 00:11:56.803 CUnit - A unit testing framework for C - Version 2.1-3 00:11:56.803 http://cunit.sourceforge.net/ 00:11:56.803 00:11:56.803 00:11:56.803 Suite: bdevio tests on: Nvme1n1 00:11:56.803 Test: blockdev write read block ...passed 00:11:56.803 Test: blockdev write zeroes read block ...passed 00:11:56.803 Test: blockdev write zeroes read no split ...passed 00:11:56.803 Test: blockdev write zeroes read split ...passed 00:11:56.803 Test: blockdev write zeroes read split partial ...passed 00:11:56.803 Test: blockdev reset ...[2024-11-20 15:02:27.432242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:56.803 [2024-11-20 15:02:27.432379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa272a0 (9): Bad file descriptor 00:11:56.803 [2024-11-20 15:02:27.445987] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:56.803 passed 00:11:56.803 Test: blockdev write read 8 blocks ...passed 00:11:56.803 Test: blockdev write read size > 128k ...passed 00:11:56.803 Test: blockdev write read invalid size ...passed 00:11:56.803 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:56.803 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:56.803 Test: blockdev write read max offset ...passed 00:11:56.803 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:56.803 Test: blockdev writev readv 8 blocks ...passed 00:11:56.803 Test: blockdev writev readv 30 x 1block ...passed 00:11:56.803 Test: blockdev writev readv block ...passed 00:11:56.803 Test: blockdev writev readv size > 128k ...passed 00:11:56.803 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:56.803 Test: blockdev comparev and writev ...[2024-11-20 15:02:27.456238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.456325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.456353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.456367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.456782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.456819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.456842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.456855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.457211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.457250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.457273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.457285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.457688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.457730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.457753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:56.803 [2024-11-20 15:02:27.457766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:56.803 passed 00:11:56.803 Test: blockdev nvme passthru rw ...passed 00:11:56.803 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:02:27.459154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.803 [2024-11-20 15:02:27.459419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.459561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.803 [2024-11-20 15:02:27.459886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.460158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.803 [2024-11-20 15:02:27.460308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:56.803 [2024-11-20 15:02:27.460676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:56.803 [2024-11-20 15:02:27.460716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:56.803 passed 00:11:56.803 Test: blockdev nvme admin passthru ...passed 00:11:56.803 Test: blockdev copy ...passed 00:11:56.803 00:11:56.803 Run Summary: Type Total Ran Passed Failed Inactive 00:11:56.803 suites 1 1 n/a 0 0 00:11:56.803 tests 23 23 23 0 0 00:11:56.803 asserts 152 152 152 0 n/a 00:11:56.803 00:11:56.803 Elapsed time = 0.146 seconds 00:11:57.062 15:02:27 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.062 15:02:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.062 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.062 15:02:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.062 15:02:27 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:57.062 15:02:27 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:57.062 15:02:27 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:57.062 15:02:27 -- nvmf/common.sh@116 -- # sync 00:11:57.062 15:02:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:57.062 15:02:27 -- nvmf/common.sh@119 -- # set +e 00:11:57.062 15:02:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:57.062 15:02:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:57.062 rmmod nvme_tcp 00:11:57.062 rmmod nvme_fabrics 00:11:57.062 rmmod nvme_keyring 00:11:57.062 15:02:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:57.062 15:02:27 -- nvmf/common.sh@123 -- # set -e 00:11:57.062 15:02:27 -- nvmf/common.sh@124 -- # return 0 00:11:57.062 15:02:27 -- nvmf/common.sh@477 -- # '[' -n 76027 ']' 00:11:57.062 15:02:27 -- nvmf/common.sh@478 -- # killprocess 76027 00:11:57.062 15:02:27 -- common/autotest_common.sh@936 -- # '[' -z 76027 ']' 00:11:57.062 15:02:27 -- common/autotest_common.sh@940 -- # kill -0 76027 00:11:57.062 15:02:27 -- common/autotest_common.sh@941 -- # uname 00:11:57.062 15:02:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:57.062 15:02:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76027 00:11:57.062 killing process with pid 76027 00:11:57.062 15:02:27 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:57.062 15:02:27 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:57.062 15:02:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76027' 00:11:57.062 15:02:27 -- common/autotest_common.sh@955 -- # kill 76027 00:11:57.062 15:02:27 -- common/autotest_common.sh@960 -- # wait 76027 00:11:57.320 15:02:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:57.320 15:02:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:57.320 15:02:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:57.320 15:02:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.320 15:02:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:57.320 15:02:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.320 15:02:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.320 15:02:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.320 15:02:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:57.320 00:11:57.320 real 0m2.738s 00:11:57.320 user 0m8.807s 00:11:57.320 sys 0m0.662s 00:11:57.320 15:02:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:57.320 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 ************************************ 00:11:57.320 END TEST nvmf_bdevio 00:11:57.320 ************************************ 00:11:57.320 15:02:27 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:57.320 15:02:27 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:57.320 15:02:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:57.320 15:02:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.320 15:02:27 -- common/autotest_common.sh@10 -- # set +x 00:11:57.320 ************************************ 00:11:57.320 START TEST nvmf_bdevio_no_huge 00:11:57.320 ************************************ 00:11:57.320 15:02:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:57.320 * Looking for test storage... 
00:11:57.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:57.320 15:02:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:57.320 15:02:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:57.320 15:02:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:57.580 15:02:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:57.580 15:02:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:57.580 15:02:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:57.580 15:02:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:57.580 15:02:28 -- scripts/common.sh@335 -- # IFS=.-: 00:11:57.580 15:02:28 -- scripts/common.sh@335 -- # read -ra ver1 00:11:57.580 15:02:28 -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.580 15:02:28 -- scripts/common.sh@336 -- # read -ra ver2 00:11:57.580 15:02:28 -- scripts/common.sh@337 -- # local 'op=<' 00:11:57.580 15:02:28 -- scripts/common.sh@339 -- # ver1_l=2 00:11:57.580 15:02:28 -- scripts/common.sh@340 -- # ver2_l=1 00:11:57.580 15:02:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:57.580 15:02:28 -- scripts/common.sh@343 -- # case "$op" in 00:11:57.580 15:02:28 -- scripts/common.sh@344 -- # : 1 00:11:57.580 15:02:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:57.580 15:02:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.580 15:02:28 -- scripts/common.sh@364 -- # decimal 1 00:11:57.580 15:02:28 -- scripts/common.sh@352 -- # local d=1 00:11:57.580 15:02:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.580 15:02:28 -- scripts/common.sh@354 -- # echo 1 00:11:57.580 15:02:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:57.580 15:02:28 -- scripts/common.sh@365 -- # decimal 2 00:11:57.580 15:02:28 -- scripts/common.sh@352 -- # local d=2 00:11:57.580 15:02:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.580 15:02:28 -- scripts/common.sh@354 -- # echo 2 00:11:57.580 15:02:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:57.580 15:02:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:57.580 15:02:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:57.580 15:02:28 -- scripts/common.sh@367 -- # return 0 00:11:57.580 15:02:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.580 15:02:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.580 --rc genhtml_branch_coverage=1 00:11:57.580 --rc genhtml_function_coverage=1 00:11:57.580 --rc genhtml_legend=1 00:11:57.580 --rc geninfo_all_blocks=1 00:11:57.580 --rc geninfo_unexecuted_blocks=1 00:11:57.580 00:11:57.580 ' 00:11:57.580 15:02:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.580 --rc genhtml_branch_coverage=1 00:11:57.580 --rc genhtml_function_coverage=1 00:11:57.580 --rc genhtml_legend=1 00:11:57.580 --rc geninfo_all_blocks=1 00:11:57.580 --rc geninfo_unexecuted_blocks=1 00:11:57.580 00:11:57.580 ' 00:11:57.580 15:02:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.580 --rc genhtml_branch_coverage=1 00:11:57.580 --rc genhtml_function_coverage=1 00:11:57.580 --rc genhtml_legend=1 00:11:57.580 --rc geninfo_all_blocks=1 00:11:57.580 --rc geninfo_unexecuted_blocks=1 00:11:57.580 00:11:57.580 ' 00:11:57.580 
15:02:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:57.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.580 --rc genhtml_branch_coverage=1 00:11:57.580 --rc genhtml_function_coverage=1 00:11:57.580 --rc genhtml_legend=1 00:11:57.580 --rc geninfo_all_blocks=1 00:11:57.580 --rc geninfo_unexecuted_blocks=1 00:11:57.580 00:11:57.580 ' 00:11:57.580 15:02:28 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:57.580 15:02:28 -- nvmf/common.sh@7 -- # uname -s 00:11:57.580 15:02:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.580 15:02:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.580 15:02:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.580 15:02:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.580 15:02:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.580 15:02:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.580 15:02:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.580 15:02:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.580 15:02:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.580 15:02:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.580 15:02:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:57.580 15:02:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:11:57.580 15:02:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.580 15:02:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.580 15:02:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:57.580 15:02:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:57.580 15:02:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.580 15:02:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.580 15:02:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.581 15:02:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.581 15:02:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.581 15:02:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.581 15:02:28 -- paths/export.sh@5 -- # export PATH 00:11:57.581 15:02:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.581 15:02:28 -- nvmf/common.sh@46 -- # : 0 00:11:57.581 15:02:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:57.581 15:02:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:57.581 15:02:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:57.581 15:02:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.581 15:02:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.581 15:02:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:57.581 15:02:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:57.581 15:02:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:57.581 15:02:28 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.581 15:02:28 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.581 15:02:28 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:57.581 15:02:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:57.581 15:02:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.581 15:02:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:57.581 15:02:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:57.581 15:02:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:57.581 15:02:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.581 15:02:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.581 15:02:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.581 15:02:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:57.581 15:02:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:57.581 15:02:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:57.581 15:02:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:57.581 15:02:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:57.581 15:02:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:57.581 15:02:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.581 15:02:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.581 15:02:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:57.581 15:02:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:57.581 15:02:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:57.581 15:02:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:57.581 15:02:28 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:57.581 15:02:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.581 15:02:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:57.581 15:02:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:57.581 15:02:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:57.581 15:02:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:57.581 15:02:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:57.581 15:02:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:57.581 Cannot find device "nvmf_tgt_br" 00:11:57.581 15:02:28 -- nvmf/common.sh@154 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:57.581 Cannot find device "nvmf_tgt_br2" 00:11:57.581 15:02:28 -- nvmf/common.sh@155 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:57.581 15:02:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:57.581 Cannot find device "nvmf_tgt_br" 00:11:57.581 15:02:28 -- nvmf/common.sh@157 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:57.581 Cannot find device "nvmf_tgt_br2" 00:11:57.581 15:02:28 -- nvmf/common.sh@158 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:57.581 15:02:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:57.581 15:02:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:57.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.581 15:02:28 -- nvmf/common.sh@161 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:57.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:57.581 15:02:28 -- nvmf/common.sh@162 -- # true 00:11:57.581 15:02:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:57.581 15:02:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:57.581 15:02:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:57.581 15:02:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:57.581 15:02:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:57.581 15:02:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:57.581 15:02:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:57.581 15:02:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:57.581 15:02:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:57.581 15:02:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:57.581 15:02:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:57.581 15:02:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:57.581 15:02:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:57.581 15:02:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:57.840 15:02:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:57.840 15:02:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:57.840 15:02:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:57.840 15:02:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:57.840 15:02:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:57.840 15:02:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:57.840 15:02:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:57.840 15:02:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:57.840 15:02:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:57.840 15:02:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:57.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:11:57.840 00:11:57.840 --- 10.0.0.2 ping statistics --- 00:11:57.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.840 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:11:57.840 15:02:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:57.840 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:57.840 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:11:57.840 00:11:57.840 --- 10.0.0.3 ping statistics --- 00:11:57.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.840 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:57.840 15:02:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:57.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:57.840 00:11:57.840 --- 10.0.0.1 ping statistics --- 00:11:57.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.840 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:57.840 15:02:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.840 15:02:28 -- nvmf/common.sh@421 -- # return 0 00:11:57.840 15:02:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:57.840 15:02:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.840 15:02:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:57.840 15:02:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:57.840 15:02:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.840 15:02:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:57.840 15:02:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:57.840 15:02:28 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:57.840 15:02:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:57.840 15:02:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.840 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
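Once nvmf_tgt is up and listening on /var/tmp/spdk.sock, both bdevio runs provision the target through the same rpc_cmd sequence; expressed with the rpc.py CLI used elsewhere in this suite, the calls amount roughly to the following (flags copied from the trace; -o comes from NVMF_TRANSPORT_OPTS for tcp and -u 8192 sets the I/O unit size):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport
  $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, set serial
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # attach the bdev as a namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420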
00:11:57.840 15:02:28 -- nvmf/common.sh@469 -- # nvmfpid=76244 00:11:57.840 15:02:28 -- nvmf/common.sh@470 -- # waitforlisten 76244 00:11:57.840 15:02:28 -- common/autotest_common.sh@829 -- # '[' -z 76244 ']' 00:11:57.840 15:02:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.841 15:02:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:57.841 15:02:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.841 15:02:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.841 15:02:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.841 15:02:28 -- common/autotest_common.sh@10 -- # set +x 00:11:57.841 [2024-11-20 15:02:28.552166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:57.841 [2024-11-20 15:02:28.552307] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:58.099 [2024-11-20 15:02:28.713568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.099 [2024-11-20 15:02:28.807500] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:58.099 [2024-11-20 15:02:28.807653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.099 [2024-11-20 15:02:28.807668] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.099 [2024-11-20 15:02:28.807677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
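Relative to the preceding nvmf_bdevio run, the distinguishing piece here is the memory setup: both the target and the bdevio initiator run without hugepages and with a fixed DPDK memory size, as in the two launch lines used by this test (paths and flags as logged):

  # target, inside the test namespace: -e 0xFFFF enables all tracepoint groups,
  # --no-huge -s 1024 asks EAL for 1024 MB of ordinary (non-hugepage) memory
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
  # initiator: bdevio consumes the generated JSON on fd 62 and gets the same memory flags
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024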
00:11:58.099 [2024-11-20 15:02:28.807788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:58.099 [2024-11-20 15:02:28.807877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:58.099 [2024-11-20 15:02:28.807955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:58.099 [2024-11-20 15:02:28.807956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.033 15:02:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:59.033 15:02:29 -- common/autotest_common.sh@862 -- # return 0 00:11:59.033 15:02:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:59.033 15:02:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 15:02:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.033 15:02:29 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:59.033 15:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 [2024-11-20 15:02:29.726140] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.033 15:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.033 15:02:29 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:59.033 15:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 Malloc0 00:11:59.033 15:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.033 15:02:29 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:59.033 15:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 15:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.033 15:02:29 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:59.033 15:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 15:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.033 15:02:29 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.033 15:02:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.033 15:02:29 -- common/autotest_common.sh@10 -- # set +x 00:11:59.033 [2024-11-20 15:02:29.770468] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.033 15:02:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.033 15:02:29 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:59.033 15:02:29 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:59.033 15:02:29 -- nvmf/common.sh@520 -- # config=() 00:11:59.033 15:02:29 -- nvmf/common.sh@520 -- # local subsystem config 00:11:59.033 15:02:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:59.033 15:02:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:59.033 { 00:11:59.033 "params": { 00:11:59.033 "name": "Nvme$subsystem", 00:11:59.033 "trtype": "$TEST_TRANSPORT", 00:11:59.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:59.033 "adrfam": "ipv4", 00:11:59.033 "trsvcid": "$NVMF_PORT", 
00:11:59.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:59.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:59.033 "hdgst": ${hdgst:-false}, 00:11:59.033 "ddgst": ${ddgst:-false} 00:11:59.033 }, 00:11:59.033 "method": "bdev_nvme_attach_controller" 00:11:59.033 } 00:11:59.033 EOF 00:11:59.033 )") 00:11:59.033 15:02:29 -- nvmf/common.sh@542 -- # cat 00:11:59.033 15:02:29 -- nvmf/common.sh@544 -- # jq . 00:11:59.033 15:02:29 -- nvmf/common.sh@545 -- # IFS=, 00:11:59.033 15:02:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:59.033 "params": { 00:11:59.033 "name": "Nvme1", 00:11:59.033 "trtype": "tcp", 00:11:59.033 "traddr": "10.0.0.2", 00:11:59.033 "adrfam": "ipv4", 00:11:59.033 "trsvcid": "4420", 00:11:59.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:59.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:59.033 "hdgst": false, 00:11:59.033 "ddgst": false 00:11:59.033 }, 00:11:59.033 "method": "bdev_nvme_attach_controller" 00:11:59.033 }' 00:11:59.291 [2024-11-20 15:02:29.836785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:11:59.291 [2024-11-20 15:02:29.836942] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76280 ] 00:11:59.291 [2024-11-20 15:02:29.988467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:59.549 [2024-11-20 15:02:30.104660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.549 [2024-11-20 15:02:30.104745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.549 [2024-11-20 15:02:30.104755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.549 [2024-11-20 15:02:30.250750] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:11:59.549 [2024-11-20 15:02:30.250986] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:59.549 I/O targets: 00:11:59.549 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:59.549 00:11:59.549 00:11:59.549 CUnit - A unit testing framework for C - Version 2.1-3 00:11:59.549 http://cunit.sourceforge.net/ 00:11:59.549 00:11:59.549 00:11:59.549 Suite: bdevio tests on: Nvme1n1 00:11:59.549 Test: blockdev write read block ...passed 00:11:59.549 Test: blockdev write zeroes read block ...passed 00:11:59.549 Test: blockdev write zeroes read no split ...passed 00:11:59.549 Test: blockdev write zeroes read split ...passed 00:11:59.549 Test: blockdev write zeroes read split partial ...passed 00:11:59.549 Test: blockdev reset ...[2024-11-20 15:02:30.295361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:59.549 [2024-11-20 15:02:30.295529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03760 (9): Bad file descriptor 00:11:59.549 passed 00:11:59.549 Test: blockdev write read 8 blocks ...[2024-11-20 15:02:30.310098] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:59.549 passed 00:11:59.549 Test: blockdev write read size > 128k ...passed 00:11:59.549 Test: blockdev write read invalid size ...passed 00:11:59.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:59.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:59.549 Test: blockdev write read max offset ...passed 00:11:59.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:59.549 Test: blockdev writev readv 8 blocks ...passed 00:11:59.549 Test: blockdev writev readv 30 x 1block ...passed 00:11:59.549 Test: blockdev writev readv block ...passed 00:11:59.549 Test: blockdev writev readv size > 128k ...passed 00:11:59.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:59.549 Test: blockdev comparev and writev ...[2024-11-20 15:02:30.322946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.323020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.323058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.323078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.323762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.323812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.323848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.323867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.324350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.324396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.324432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.324451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.325054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.325100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.325139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:59.549 [2024-11-20 15:02:30.325158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:11:59.549 passed 00:11:59.549 Test: blockdev nvme passthru rw ...passed 00:11:59.549 Test: blockdev nvme passthru vendor specific ...[2024-11-20 15:02:30.326743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.549 [2024-11-20 15:02:30.326792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.327007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.549 [2024-11-20 15:02:30.327039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:59.549 [2024-11-20 15:02:30.327411] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.549 [2024-11-20 15:02:30.327457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:59.550 [2024-11-20 15:02:30.327669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:59.550 [2024-11-20 15:02:30.327701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:59.550 passed 00:11:59.550 Test: blockdev nvme admin passthru ...passed 00:11:59.550 Test: blockdev copy ...passed 00:11:59.550 00:11:59.550 Run Summary: Type Total Ran Passed Failed Inactive 00:11:59.550 suites 1 1 n/a 0 0 00:11:59.550 tests 23 23 23 0 0 00:11:59.550 asserts 152 152 152 0 n/a 00:11:59.550 00:11:59.550 Elapsed time = 0.182 seconds 00:12:00.122 15:02:30 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.122 15:02:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.122 15:02:30 -- common/autotest_common.sh@10 -- # set +x 00:12:00.122 15:02:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.122 15:02:30 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:00.122 15:02:30 -- target/bdevio.sh@30 -- # nvmftestfini 00:12:00.122 15:02:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:00.122 15:02:30 -- nvmf/common.sh@116 -- # sync 00:12:00.122 15:02:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:00.122 15:02:30 -- nvmf/common.sh@119 -- # set +e 00:12:00.122 15:02:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:00.122 15:02:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:00.122 rmmod nvme_tcp 00:12:00.122 rmmod nvme_fabrics 00:12:00.122 rmmod nvme_keyring 00:12:00.122 15:02:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:00.122 15:02:30 -- nvmf/common.sh@123 -- # set -e 00:12:00.122 15:02:30 -- nvmf/common.sh@124 -- # return 0 00:12:00.122 15:02:30 -- nvmf/common.sh@477 -- # '[' -n 76244 ']' 00:12:00.122 15:02:30 -- nvmf/common.sh@478 -- # killprocess 76244 00:12:00.122 15:02:30 -- common/autotest_common.sh@936 -- # '[' -z 76244 ']' 00:12:00.122 15:02:30 -- common/autotest_common.sh@940 -- # kill -0 76244 00:12:00.122 15:02:30 -- common/autotest_common.sh@941 -- # uname 00:12:00.122 15:02:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:00.122 15:02:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76244 00:12:00.122 killing process with pid 76244 00:12:00.122 15:02:30 -- common/autotest_common.sh@942 -- # 
process_name=reactor_3 00:12:00.122 15:02:30 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:12:00.122 15:02:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76244' 00:12:00.122 15:02:30 -- common/autotest_common.sh@955 -- # kill 76244 00:12:00.122 15:02:30 -- common/autotest_common.sh@960 -- # wait 76244 00:12:00.689 15:02:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:00.689 15:02:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:00.689 15:02:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:00.689 15:02:31 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.689 15:02:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:00.689 15:02:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.689 15:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.689 15:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.689 15:02:31 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:00.689 00:12:00.689 real 0m3.396s 00:12:00.689 user 0m11.116s 00:12:00.689 sys 0m1.364s 00:12:00.689 15:02:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:00.689 ************************************ 00:12:00.689 END TEST nvmf_bdevio_no_huge 00:12:00.689 ************************************ 00:12:00.689 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.689 15:02:31 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:00.689 15:02:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:00.689 15:02:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:00.689 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:12:00.689 ************************************ 00:12:00.689 START TEST nvmf_tls 00:12:00.689 ************************************ 00:12:00.689 15:02:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:00.689 * Looking for test storage... 00:12:00.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:00.689 15:02:31 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:00.689 15:02:31 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:00.689 15:02:31 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:00.979 15:02:31 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:00.979 15:02:31 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:00.979 15:02:31 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:00.979 15:02:31 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:00.979 15:02:31 -- scripts/common.sh@335 -- # IFS=.-: 00:12:00.979 15:02:31 -- scripts/common.sh@335 -- # read -ra ver1 00:12:00.979 15:02:31 -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.979 15:02:31 -- scripts/common.sh@336 -- # read -ra ver2 00:12:00.979 15:02:31 -- scripts/common.sh@337 -- # local 'op=<' 00:12:00.979 15:02:31 -- scripts/common.sh@339 -- # ver1_l=2 00:12:00.979 15:02:31 -- scripts/common.sh@340 -- # ver2_l=1 00:12:00.979 15:02:31 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:00.979 15:02:31 -- scripts/common.sh@343 -- # case "$op" in 00:12:00.979 15:02:31 -- scripts/common.sh@344 -- # : 1 00:12:00.979 15:02:31 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:00.979 15:02:31 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.979 15:02:31 -- scripts/common.sh@364 -- # decimal 1 00:12:00.979 15:02:31 -- scripts/common.sh@352 -- # local d=1 00:12:00.979 15:02:31 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.979 15:02:31 -- scripts/common.sh@354 -- # echo 1 00:12:00.979 15:02:31 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:00.979 15:02:31 -- scripts/common.sh@365 -- # decimal 2 00:12:00.979 15:02:31 -- scripts/common.sh@352 -- # local d=2 00:12:00.979 15:02:31 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.979 15:02:31 -- scripts/common.sh@354 -- # echo 2 00:12:00.979 15:02:31 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:00.979 15:02:31 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:00.979 15:02:31 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:00.979 15:02:31 -- scripts/common.sh@367 -- # return 0 00:12:00.979 15:02:31 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.979 15:02:31 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:00.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.979 --rc genhtml_branch_coverage=1 00:12:00.979 --rc genhtml_function_coverage=1 00:12:00.979 --rc genhtml_legend=1 00:12:00.979 --rc geninfo_all_blocks=1 00:12:00.979 --rc geninfo_unexecuted_blocks=1 00:12:00.979 00:12:00.979 ' 00:12:00.979 15:02:31 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:00.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.979 --rc genhtml_branch_coverage=1 00:12:00.979 --rc genhtml_function_coverage=1 00:12:00.979 --rc genhtml_legend=1 00:12:00.979 --rc geninfo_all_blocks=1 00:12:00.979 --rc geninfo_unexecuted_blocks=1 00:12:00.979 00:12:00.979 ' 00:12:00.979 15:02:31 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:00.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.979 --rc genhtml_branch_coverage=1 00:12:00.979 --rc genhtml_function_coverage=1 00:12:00.979 --rc genhtml_legend=1 00:12:00.979 --rc geninfo_all_blocks=1 00:12:00.979 --rc geninfo_unexecuted_blocks=1 00:12:00.979 00:12:00.979 ' 00:12:00.979 15:02:31 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:00.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.979 --rc genhtml_branch_coverage=1 00:12:00.979 --rc genhtml_function_coverage=1 00:12:00.979 --rc genhtml_legend=1 00:12:00.979 --rc geninfo_all_blocks=1 00:12:00.979 --rc geninfo_unexecuted_blocks=1 00:12:00.979 00:12:00.979 ' 00:12:00.979 15:02:31 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:00.979 15:02:31 -- nvmf/common.sh@7 -- # uname -s 00:12:00.979 15:02:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.979 15:02:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.979 15:02:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.979 15:02:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.979 15:02:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.979 15:02:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.979 15:02:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.979 15:02:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.979 15:02:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.979 15:02:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.979 15:02:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:12:00.979 
15:02:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:12:00.979 15:02:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.979 15:02:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.979 15:02:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:00.979 15:02:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:00.979 15:02:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.979 15:02:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.979 15:02:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.979 15:02:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.979 15:02:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.979 15:02:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.979 15:02:31 -- paths/export.sh@5 -- # export PATH 00:12:00.979 15:02:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.979 15:02:31 -- nvmf/common.sh@46 -- # : 0 00:12:00.979 15:02:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:00.979 15:02:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:00.979 15:02:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:00.980 15:02:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.980 15:02:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.980 15:02:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:12:00.980 15:02:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:00.980 15:02:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:00.980 15:02:31 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:00.980 15:02:31 -- target/tls.sh@71 -- # nvmftestinit 00:12:00.980 15:02:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:00.980 15:02:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.980 15:02:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:00.980 15:02:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:00.980 15:02:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:00.980 15:02:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.980 15:02:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.980 15:02:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.980 15:02:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:00.980 15:02:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:00.980 15:02:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:00.980 15:02:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:00.980 15:02:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:00.980 15:02:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:00.980 15:02:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.980 15:02:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.980 15:02:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:00.980 15:02:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:00.980 15:02:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:00.980 15:02:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:00.980 15:02:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:00.980 15:02:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.980 15:02:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:00.980 15:02:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:00.980 15:02:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:00.980 15:02:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:00.980 15:02:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:00.980 15:02:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:00.980 Cannot find device "nvmf_tgt_br" 00:12:00.980 15:02:31 -- nvmf/common.sh@154 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:00.980 Cannot find device "nvmf_tgt_br2" 00:12:00.980 15:02:31 -- nvmf/common.sh@155 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:00.980 15:02:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:00.980 Cannot find device "nvmf_tgt_br" 00:12:00.980 15:02:31 -- nvmf/common.sh@157 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:00.980 Cannot find device "nvmf_tgt_br2" 00:12:00.980 15:02:31 -- nvmf/common.sh@158 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:00.980 15:02:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:00.980 15:02:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:00.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:12:00.980 15:02:31 -- nvmf/common.sh@161 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:00.980 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:00.980 15:02:31 -- nvmf/common.sh@162 -- # true 00:12:00.980 15:02:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:00.980 15:02:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:00.980 15:02:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:00.980 15:02:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:01.238 15:02:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:01.238 15:02:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:01.238 15:02:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:01.238 15:02:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:01.238 15:02:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:01.238 15:02:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:01.238 15:02:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:01.238 15:02:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:01.238 15:02:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:01.238 15:02:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:01.238 15:02:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:01.238 15:02:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:01.238 15:02:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:01.238 15:02:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:01.238 15:02:31 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:01.238 15:02:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:01.238 15:02:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:01.238 15:02:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:01.238 15:02:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:01.238 15:02:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:01.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:12:01.238 00:12:01.238 --- 10.0.0.2 ping statistics --- 00:12:01.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.238 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:12:01.238 15:02:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:01.238 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:01.238 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:01.238 00:12:01.238 --- 10.0.0.3 ping statistics --- 00:12:01.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.238 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:01.238 15:02:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:01.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:12:01.238 00:12:01.238 --- 10.0.0.1 ping statistics --- 00:12:01.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.238 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:12:01.238 15:02:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.238 15:02:31 -- nvmf/common.sh@421 -- # return 0 00:12:01.238 15:02:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:01.238 15:02:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.238 15:02:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:01.238 15:02:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:01.238 15:02:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.238 15:02:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:01.238 15:02:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:01.238 15:02:31 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:01.238 15:02:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:01.238 15:02:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:01.238 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.238 15:02:31 -- nvmf/common.sh@469 -- # nvmfpid=76474 00:12:01.238 15:02:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:01.238 15:02:31 -- nvmf/common.sh@470 -- # waitforlisten 76474 00:12:01.238 15:02:31 -- common/autotest_common.sh@829 -- # '[' -z 76474 ']' 00:12:01.238 15:02:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.238 15:02:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.238 15:02:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.238 15:02:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.238 15:02:31 -- common/autotest_common.sh@10 -- # set +x 00:12:01.496 [2024-11-20 15:02:32.043242] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:01.496 [2024-11-20 15:02:32.043366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.496 [2024-11-20 15:02:32.183620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.496 [2024-11-20 15:02:32.221498] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:01.496 [2024-11-20 15:02:32.221672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.496 [2024-11-20 15:02:32.221687] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.496 [2024-11-20 15:02:32.221697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
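(The nvmf_veth_init sequence above builds the test topology; a condensed sketch of the equivalent setup, assuming a clean host and using only the interface names and addresses shown in the log — the second target interface, nvmf_tgt_if2 with 10.0.0.3/24, follows the same pattern.)

# target side lives in its own network namespace, bridged back to the initiator side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end: 10.0.0.1/24
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target end:    10.0.0.2/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up; ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP (port 4420) in and bridge-local forwarding, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2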
00:12:01.496 [2024-11-20 15:02:32.221737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.496 15:02:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:01.496 15:02:32 -- common/autotest_common.sh@862 -- # return 0 00:12:01.496 15:02:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:01.496 15:02:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:01.496 15:02:32 -- common/autotest_common.sh@10 -- # set +x 00:12:01.754 15:02:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.754 15:02:32 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:12:01.754 15:02:32 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:02.012 true 00:12:02.012 15:02:32 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:02.012 15:02:32 -- target/tls.sh@82 -- # jq -r .tls_version 00:12:02.270 15:02:32 -- target/tls.sh@82 -- # version=0 00:12:02.270 15:02:32 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:12:02.270 15:02:32 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:02.527 15:02:33 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:02.527 15:02:33 -- target/tls.sh@90 -- # jq -r .tls_version 00:12:02.786 15:02:33 -- target/tls.sh@90 -- # version=13 00:12:02.786 15:02:33 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:12:02.786 15:02:33 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:03.044 15:02:33 -- target/tls.sh@98 -- # jq -r .tls_version 00:12:03.044 15:02:33 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:03.610 15:02:34 -- target/tls.sh@98 -- # version=7 00:12:03.610 15:02:34 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:12:03.610 15:02:34 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:03.610 15:02:34 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:03.869 15:02:34 -- target/tls.sh@105 -- # ktls=false 00:12:03.869 15:02:34 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:12:03.869 15:02:34 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:04.128 15:02:34 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:04.128 15:02:34 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:04.386 15:02:35 -- target/tls.sh@113 -- # ktls=true 00:12:04.386 15:02:35 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:12:04.386 15:02:35 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:04.644 15:02:35 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:04.644 15:02:35 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:12:04.902 15:02:35 -- target/tls.sh@121 -- # ktls=false 00:12:04.902 15:02:35 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:12:04.902 15:02:35 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:12:04.902 15:02:35 -- target/tls.sh@49 -- # local key hash crc 00:12:04.902 15:02:35 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:12:04.902 15:02:35 -- target/tls.sh@51 -- # hash=01 00:12:04.902 15:02:35 -- 
target/tls.sh@52 -- # gzip -1 -c 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # tail -c8 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # head -c 4 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # crc='p$H�' 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:04.902 15:02:35 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:04.902 15:02:35 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:12:04.902 15:02:35 -- target/tls.sh@49 -- # local key hash crc 00:12:04.902 15:02:35 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:12:04.902 15:02:35 -- target/tls.sh@51 -- # hash=01 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # gzip -1 -c 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # tail -c8 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # head -c 4 00:12:04.902 15:02:35 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:12:04.902 15:02:35 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:04.902 15:02:35 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:04.903 15:02:35 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.903 15:02:35 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:04.903 15:02:35 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:04.903 15:02:35 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:04.903 15:02:35 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:04.903 15:02:35 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:04.903 15:02:35 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:05.160 15:02:35 -- target/tls.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:05.766 15:02:36 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:05.766 15:02:36 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:05.766 15:02:36 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:05.766 [2024-11-20 15:02:36.520219] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.766 15:02:36 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:06.332 15:02:36 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:06.332 [2024-11-20 15:02:37.060371] tcp.c: 914:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:12:06.332 [2024-11-20 15:02:37.060600] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.332 15:02:37 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:06.898 malloc0 00:12:06.898 15:02:37 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:07.157 15:02:37 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:07.415 15:02:38 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:19.648 Initializing NVMe Controllers 00:12:19.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:19.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:19.648 Initialization complete. Launching workers. 00:12:19.648 ======================================================== 00:12:19.648 Latency(us) 00:12:19.648 Device Information : IOPS MiB/s Average min max 00:12:19.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9573.52 37.40 6686.71 1028.88 13364.24 00:12:19.648 ======================================================== 00:12:19.648 Total : 9573.52 37.40 6686.71 1028.88 13364.24 00:12:19.648 00:12:19.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:19.648 15:02:48 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:19.648 15:02:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:19.648 15:02:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:19.648 15:02:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:19.648 15:02:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:19.648 15:02:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:19.648 15:02:48 -- target/tls.sh@28 -- # bdevperf_pid=76722 00:12:19.648 15:02:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:19.648 15:02:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:19.648 15:02:48 -- target/tls.sh@31 -- # waitforlisten 76722 /var/tmp/bdevperf.sock 00:12:19.648 15:02:48 -- common/autotest_common.sh@829 -- # '[' -z 76722 ']' 00:12:19.648 15:02:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:19.648 15:02:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:19.648 15:02:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
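(The format_interchange_psk steps above produce the NVMeTLSkey-1 strings written to key1.txt and key2.txt and used by the perf run; a minimal re-derivation sketch with the same tools the script uses, assuming the configured PSK is kept as its 32-character hex string and that gzip's trailer supplies the CRC32 — variable names here are illustrative.)

key=00112233445566778899aabbccddeeff   # configured PSK, used verbatim as an ASCII string
hash=01                                # hash field of the interchange string, as in the log
# gzip's 8-byte trailer is CRC32 then ISIZE (both little-endian); keep only the 4 CRC bytes
crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
# interchange key = "NVMeTLSkey-1:<hash>:" + base64(key || crc) + ":"
echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
# expected output, matching the log: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: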
00:12:19.648 15:02:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:19.648 15:02:48 -- common/autotest_common.sh@10 -- # set +x 00:12:19.648 [2024-11-20 15:02:48.358756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:19.648 [2024-11-20 15:02:48.359155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76722 ] 00:12:19.648 [2024-11-20 15:02:48.515048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.648 [2024-11-20 15:02:48.559610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.648 15:02:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.648 15:02:49 -- common/autotest_common.sh@862 -- # return 0 00:12:19.648 15:02:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:19.648 [2024-11-20 15:02:49.722637] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:19.648 TLSTESTn1 00:12:19.648 15:02:49 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:19.648 Running I/O for 10 seconds... 00:12:29.619 00:12:29.619 Latency(us) 00:12:29.619 [2024-11-20T15:03:00.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.619 [2024-11-20T15:03:00.423Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:29.620 Verification LBA range: start 0x0 length 0x2000 00:12:29.620 TLSTESTn1 : 10.01 5515.32 21.54 0.00 0.00 23169.81 5242.88 31457.28 00:12:29.620 [2024-11-20T15:03:00.424Z] =================================================================================================================== 00:12:29.620 [2024-11-20T15:03:00.424Z] Total : 5515.32 21.54 0.00 0.00 23169.81 5242.88 31457.28 00:12:29.620 0 00:12:29.620 15:02:59 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:29.620 15:02:59 -- target/tls.sh@45 -- # killprocess 76722 00:12:29.620 15:02:59 -- common/autotest_common.sh@936 -- # '[' -z 76722 ']' 00:12:29.620 15:02:59 -- common/autotest_common.sh@940 -- # kill -0 76722 00:12:29.620 15:02:59 -- common/autotest_common.sh@941 -- # uname 00:12:29.620 15:02:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.620 15:02:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76722 00:12:29.620 killing process with pid 76722 00:12:29.620 Received shutdown signal, test time was about 10.000000 seconds 00:12:29.620 00:12:29.620 Latency(us) 00:12:29.620 [2024-11-20T15:03:00.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.620 [2024-11-20T15:03:00.424Z] =================================================================================================================== 00:12:29.620 [2024-11-20T15:03:00.424Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:29.620 15:02:59 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:29.620 15:02:59 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:29.620 15:02:59 -- common/autotest_common.sh@954 -- # echo 
'killing process with pid 76722' 00:12:29.620 15:02:59 -- common/autotest_common.sh@955 -- # kill 76722 00:12:29.620 15:02:59 -- common/autotest_common.sh@960 -- # wait 76722 00:12:29.620 15:03:00 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:29.620 15:03:00 -- common/autotest_common.sh@650 -- # local es=0 00:12:29.620 15:03:00 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:29.620 15:03:00 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:29.620 15:03:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.620 15:03:00 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:29.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:29.620 15:03:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:29.620 15:03:00 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:29.620 15:03:00 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:29.620 15:03:00 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:29.620 15:03:00 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:29.620 15:03:00 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:12:29.620 15:03:00 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:29.620 15:03:00 -- target/tls.sh@28 -- # bdevperf_pid=76857 00:12:29.620 15:03:00 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:29.620 15:03:00 -- target/tls.sh@31 -- # waitforlisten 76857 /var/tmp/bdevperf.sock 00:12:29.620 15:03:00 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:29.620 15:03:00 -- common/autotest_common.sh@829 -- # '[' -z 76857 ']' 00:12:29.620 15:03:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:29.620 15:03:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.620 15:03:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:29.620 15:03:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.620 15:03:00 -- common/autotest_common.sh@10 -- # set +x 00:12:29.620 [2024-11-20 15:03:00.210480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:29.620 [2024-11-20 15:03:00.210609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76857 ] 00:12:29.620 [2024-11-20 15:03:00.354220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.620 [2024-11-20 15:03:00.389273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:29.877 15:03:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:29.877 15:03:00 -- common/autotest_common.sh@862 -- # return 0 00:12:29.877 15:03:00 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:12:30.136 [2024-11-20 15:03:00.897339] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:30.136 [2024-11-20 15:03:00.902362] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:30.136 [2024-11-20 15:03:00.903037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7deb80 (107): Transport endpoint is not connected 00:12:30.136 [2024-11-20 15:03:00.904019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7deb80 (9): Bad file descriptor 00:12:30.136 [2024-11-20 15:03:00.905013] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:30.136 [2024-11-20 15:03:00.905038] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:30.136 [2024-11-20 15:03:00.905050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:30.136 request: 00:12:30.136 { 00:12:30.136 "name": "TLSTEST", 00:12:30.136 "trtype": "tcp", 00:12:30.136 "traddr": "10.0.0.2", 00:12:30.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.136 "adrfam": "ipv4", 00:12:30.136 "trsvcid": "4420", 00:12:30.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.136 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:12:30.136 "method": "bdev_nvme_attach_controller", 00:12:30.136 "req_id": 1 00:12:30.136 } 00:12:30.136 Got JSON-RPC error response 00:12:30.136 response: 00:12:30.136 { 00:12:30.136 "code": -32602, 00:12:30.136 "message": "Invalid parameters" 00:12:30.136 } 00:12:30.136 15:03:00 -- target/tls.sh@36 -- # killprocess 76857 00:12:30.136 15:03:00 -- common/autotest_common.sh@936 -- # '[' -z 76857 ']' 00:12:30.136 15:03:00 -- common/autotest_common.sh@940 -- # kill -0 76857 00:12:30.136 15:03:00 -- common/autotest_common.sh@941 -- # uname 00:12:30.136 15:03:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.136 15:03:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76857 00:12:30.394 15:03:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:30.394 15:03:00 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:30.394 15:03:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76857' 00:12:30.394 killing process with pid 76857 00:12:30.394 15:03:00 -- common/autotest_common.sh@955 -- # kill 76857 00:12:30.394 15:03:00 -- common/autotest_common.sh@960 -- # wait 76857 00:12:30.394 Received shutdown signal, test time was about 10.000000 seconds 00:12:30.394 00:12:30.394 Latency(us) 00:12:30.394 [2024-11-20T15:03:01.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.394 [2024-11-20T15:03:01.198Z] =================================================================================================================== 00:12:30.394 [2024-11-20T15:03:01.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:30.394 15:03:01 -- target/tls.sh@37 -- # return 1 00:12:30.394 15:03:01 -- common/autotest_common.sh@653 -- # es=1 00:12:30.394 15:03:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:30.394 15:03:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:30.394 15:03:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:30.394 15:03:01 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:30.394 15:03:01 -- common/autotest_common.sh@650 -- # local es=0 00:12:30.394 15:03:01 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:30.394 15:03:01 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:30.394 15:03:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.394 15:03:01 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:30.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:30.394 15:03:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:30.394 15:03:01 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:30.394 15:03:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:30.394 15:03:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:30.394 15:03:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:12:30.394 15:03:01 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:30.394 15:03:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:30.394 15:03:01 -- target/tls.sh@28 -- # bdevperf_pid=76877 00:12:30.394 15:03:01 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:30.394 15:03:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:30.394 15:03:01 -- target/tls.sh@31 -- # waitforlisten 76877 /var/tmp/bdevperf.sock 00:12:30.394 15:03:01 -- common/autotest_common.sh@829 -- # '[' -z 76877 ']' 00:12:30.394 15:03:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.394 15:03:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.394 15:03:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.394 15:03:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.394 15:03:01 -- common/autotest_common.sh@10 -- # set +x 00:12:30.394 [2024-11-20 15:03:01.181240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:30.394 [2024-11-20 15:03:01.181591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76877 ] 00:12:30.653 [2024-11-20 15:03:01.346721] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.653 [2024-11-20 15:03:01.394232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.586 15:03:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.586 15:03:02 -- common/autotest_common.sh@862 -- # return 0 00:12:31.586 15:03:02 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:31.845 [2024-11-20 15:03:02.585169] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:31.845 [2024-11-20 15:03:02.591954] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:31.845 [2024-11-20 15:03:02.592228] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:12:31.845 [2024-11-20 15:03:02.592519] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:31.845 [2024-11-20 15:03:02.592943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6bb80 (107): Transport endpoint is not connected 00:12:31.845 [2024-11-20 15:03:02.593922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f6bb80 (9): Bad file descriptor 00:12:31.845 [2024-11-20 15:03:02.594916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:31.845 [2024-11-20 15:03:02.594956] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:31.845 [2024-11-20 15:03:02.594972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
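(The lookup that fails above keys the target's PSK store by a TLS PSK identity built from both NQNs; a sketch of that identity as it appears in the error lines, with the format taken from the log rather than the spec text.)

# identity the target searches for when nqn.2016-06.io.spdk:host2 connects to cnode1
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 $hostnqn $subnqn"
# key1.txt was registered (nvmf_subsystem_add_host --psk) only for host1, so this identity
# has no PSK; the handshake is rejected and the initiator sees errno 107 (ENOTCONN).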
00:12:31.845 request: 00:12:31.845 { 00:12:31.845 "name": "TLSTEST", 00:12:31.845 "trtype": "tcp", 00:12:31.845 "traddr": "10.0.0.2", 00:12:31.845 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:12:31.845 "adrfam": "ipv4", 00:12:31.845 "trsvcid": "4420", 00:12:31.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.845 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:31.845 "method": "bdev_nvme_attach_controller", 00:12:31.845 "req_id": 1 00:12:31.845 } 00:12:31.845 Got JSON-RPC error response 00:12:31.845 response: 00:12:31.845 { 00:12:31.845 "code": -32602, 00:12:31.845 "message": "Invalid parameters" 00:12:31.845 } 00:12:31.845 15:03:02 -- target/tls.sh@36 -- # killprocess 76877 00:12:31.845 15:03:02 -- common/autotest_common.sh@936 -- # '[' -z 76877 ']' 00:12:31.845 15:03:02 -- common/autotest_common.sh@940 -- # kill -0 76877 00:12:31.845 15:03:02 -- common/autotest_common.sh@941 -- # uname 00:12:31.845 15:03:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.845 15:03:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76877 00:12:32.104 killing process with pid 76877 00:12:32.104 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.104 00:12:32.104 Latency(us) 00:12:32.104 [2024-11-20T15:03:02.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.104 [2024-11-20T15:03:02.908Z] =================================================================================================================== 00:12:32.104 [2024-11-20T15:03:02.908Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:32.104 15:03:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:32.104 15:03:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:32.104 15:03:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76877' 00:12:32.104 15:03:02 -- common/autotest_common.sh@955 -- # kill 76877 00:12:32.104 15:03:02 -- common/autotest_common.sh@960 -- # wait 76877 00:12:32.104 15:03:02 -- target/tls.sh@37 -- # return 1 00:12:32.104 15:03:02 -- common/autotest_common.sh@653 -- # es=1 00:12:32.104 15:03:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.104 15:03:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.104 15:03:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.104 15:03:02 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:32.104 15:03:02 -- common/autotest_common.sh@650 -- # local es=0 00:12:32.104 15:03:02 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:32.104 15:03:02 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:32.104 15:03:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.104 15:03:02 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:32.104 15:03:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.104 15:03:02 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:32.104 15:03:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:32.104 15:03:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:12:32.104 15:03:02 -- target/tls.sh@23 -- # 
hostnqn=nqn.2016-06.io.spdk:host1 00:12:32.104 15:03:02 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:12:32.104 15:03:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:32.104 15:03:02 -- target/tls.sh@28 -- # bdevperf_pid=76905 00:12:32.104 15:03:02 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:32.104 15:03:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:32.104 15:03:02 -- target/tls.sh@31 -- # waitforlisten 76905 /var/tmp/bdevperf.sock 00:12:32.104 15:03:02 -- common/autotest_common.sh@829 -- # '[' -z 76905 ']' 00:12:32.104 15:03:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:32.104 15:03:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.104 15:03:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:32.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:32.104 15:03:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.104 15:03:02 -- common/autotest_common.sh@10 -- # set +x 00:12:32.104 [2024-11-20 15:03:02.850003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:32.104 [2024-11-20 15:03:02.850392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76905 ] 00:12:32.362 [2024-11-20 15:03:02.993203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.362 [2024-11-20 15:03:03.032905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.362 15:03:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.363 15:03:03 -- common/autotest_common.sh@862 -- # return 0 00:12:32.363 15:03:03 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:12:32.620 [2024-11-20 15:03:03.423359] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:32.879 [2024-11-20 15:03:03.430011] tcp.c: 868:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:32.879 [2024-11-20 15:03:03.430225] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:12:32.879 [2024-11-20 15:03:03.430462] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:32.879 [2024-11-20 15:03:03.430890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bab80 (107): Transport endpoint is not connected 00:12:32.879 [2024-11-20 15:03:03.431845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bab80 (9): Bad file descriptor 00:12:32.879 [2024-11-20 15:03:03.432841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:12:32.879 [2024-11-20 15:03:03.432887] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:32.879 [2024-11-20 15:03:03.432906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:12:32.879 request: 00:12:32.879 { 00:12:32.879 "name": "TLSTEST", 00:12:32.879 "trtype": "tcp", 00:12:32.879 "traddr": "10.0.0.2", 00:12:32.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:32.879 "adrfam": "ipv4", 00:12:32.879 "trsvcid": "4420", 00:12:32.879 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:12:32.879 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:12:32.879 "method": "bdev_nvme_attach_controller", 00:12:32.879 "req_id": 1 00:12:32.879 } 00:12:32.879 Got JSON-RPC error response 00:12:32.879 response: 00:12:32.879 { 00:12:32.879 "code": -32602, 00:12:32.879 "message": "Invalid parameters" 00:12:32.879 } 00:12:32.879 15:03:03 -- target/tls.sh@36 -- # killprocess 76905 00:12:32.879 15:03:03 -- common/autotest_common.sh@936 -- # '[' -z 76905 ']' 00:12:32.879 15:03:03 -- common/autotest_common.sh@940 -- # kill -0 76905 00:12:32.879 15:03:03 -- common/autotest_common.sh@941 -- # uname 00:12:32.879 15:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:32.879 15:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76905 00:12:32.879 killing process with pid 76905 00:12:32.879 Received shutdown signal, test time was about 10.000000 seconds 00:12:32.879 00:12:32.879 Latency(us) 00:12:32.879 [2024-11-20T15:03:03.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:32.879 [2024-11-20T15:03:03.683Z] =================================================================================================================== 00:12:32.879 [2024-11-20T15:03:03.683Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:32.879 15:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:32.879 15:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:32.879 15:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76905' 00:12:32.879 15:03:03 -- common/autotest_common.sh@955 -- # kill 76905 00:12:32.879 15:03:03 -- common/autotest_common.sh@960 -- # wait 76905 00:12:32.879 15:03:03 -- target/tls.sh@37 -- # return 1 00:12:32.879 15:03:03 -- common/autotest_common.sh@653 -- # es=1 00:12:32.879 15:03:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:32.879 15:03:03 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:32.879 15:03:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:32.879 15:03:03 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:32.879 15:03:03 -- common/autotest_common.sh@650 -- # local es=0 00:12:32.879 15:03:03 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:32.879 15:03:03 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:32.879 15:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.880 15:03:03 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:32.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:12:32.880 15:03:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:32.880 15:03:03 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:12:32.880 15:03:03 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:32.880 15:03:03 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:32.880 15:03:03 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:32.880 15:03:03 -- target/tls.sh@23 -- # psk= 00:12:32.880 15:03:03 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:32.880 15:03:03 -- target/tls.sh@28 -- # bdevperf_pid=76925 00:12:32.880 15:03:03 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:32.880 15:03:03 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:32.880 15:03:03 -- target/tls.sh@31 -- # waitforlisten 76925 /var/tmp/bdevperf.sock 00:12:32.880 15:03:03 -- common/autotest_common.sh@829 -- # '[' -z 76925 ']' 00:12:32.880 15:03:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:32.880 15:03:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.880 15:03:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:32.880 15:03:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.880 15:03:03 -- common/autotest_common.sh@10 -- # set +x 00:12:32.880 [2024-11-20 15:03:03.671210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:32.880 [2024-11-20 15:03:03.671496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76925 ] 00:12:33.138 [2024-11-20 15:03:03.805411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.138 [2024-11-20 15:03:03.841202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.133 15:03:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:34.133 15:03:04 -- common/autotest_common.sh@862 -- # return 0 00:12:34.133 15:03:04 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:34.133 [2024-11-20 15:03:04.920327] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:12:34.133 [2024-11-20 15:03:04.921856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdfa450 (9): Bad file descriptor 00:12:34.133 [2024-11-20 15:03:04.922850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:12:34.133 [2024-11-20 15:03:04.923286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:12:34.133 [2024-11-20 15:03:04.923513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:12:34.133 request: 00:12:34.133 { 00:12:34.133 "name": "TLSTEST", 00:12:34.133 "trtype": "tcp", 00:12:34.133 "traddr": "10.0.0.2", 00:12:34.133 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:34.133 "adrfam": "ipv4", 00:12:34.133 "trsvcid": "4420", 00:12:34.133 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:34.133 "method": "bdev_nvme_attach_controller", 00:12:34.133 "req_id": 1 00:12:34.133 } 00:12:34.133 Got JSON-RPC error response 00:12:34.133 response: 00:12:34.133 { 00:12:34.133 "code": -32602, 00:12:34.133 "message": "Invalid parameters" 00:12:34.133 } 00:12:34.393 15:03:04 -- target/tls.sh@36 -- # killprocess 76925 00:12:34.393 15:03:04 -- common/autotest_common.sh@936 -- # '[' -z 76925 ']' 00:12:34.393 15:03:04 -- common/autotest_common.sh@940 -- # kill -0 76925 00:12:34.393 15:03:04 -- common/autotest_common.sh@941 -- # uname 00:12:34.393 15:03:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.393 15:03:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76925 00:12:34.393 killing process with pid 76925 00:12:34.393 Received shutdown signal, test time was about 10.000000 seconds 00:12:34.393 00:12:34.393 Latency(us) 00:12:34.393 [2024-11-20T15:03:05.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.393 [2024-11-20T15:03:05.197Z] =================================================================================================================== 00:12:34.393 [2024-11-20T15:03:05.197Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:34.393 15:03:04 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:34.393 15:03:04 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:34.393 15:03:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76925' 00:12:34.393 15:03:04 -- common/autotest_common.sh@955 -- # kill 76925 00:12:34.393 15:03:04 -- common/autotest_common.sh@960 -- # wait 76925 00:12:34.393 15:03:05 -- target/tls.sh@37 -- # return 1 00:12:34.393 15:03:05 -- common/autotest_common.sh@653 -- # es=1 00:12:34.393 15:03:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.393 15:03:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.393 15:03:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.393 15:03:05 -- target/tls.sh@167 -- # killprocess 76474 00:12:34.393 15:03:05 -- common/autotest_common.sh@936 -- # '[' -z 76474 ']' 00:12:34.393 15:03:05 -- common/autotest_common.sh@940 -- # kill -0 76474 00:12:34.393 15:03:05 -- common/autotest_common.sh@941 -- # uname 00:12:34.393 15:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.393 15:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76474 00:12:34.393 killing process with pid 76474 00:12:34.393 15:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:34.393 15:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:34.393 15:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76474' 00:12:34.393 15:03:05 -- common/autotest_common.sh@955 -- # kill 76474 00:12:34.393 15:03:05 -- common/autotest_common.sh@960 -- # wait 76474 00:12:34.651 15:03:05 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:12:34.651 15:03:05 -- target/tls.sh@49 -- # local key hash crc 00:12:34.651 15:03:05 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:12:34.651 15:03:05 -- target/tls.sh@51 -- # hash=02 
00:12:34.651 15:03:05 -- target/tls.sh@52 -- # gzip -1 -c 00:12:34.651 15:03:05 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:12:34.651 15:03:05 -- target/tls.sh@52 -- # head -c 4 00:12:34.651 15:03:05 -- target/tls.sh@52 -- # tail -c8 00:12:34.651 15:03:05 -- target/tls.sh@52 -- # crc='�e�'\''' 00:12:34.651 15:03:05 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:12:34.651 15:03:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:12:34.651 15:03:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:34.651 15:03:05 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:34.651 15:03:05 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.651 15:03:05 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:12:34.651 15:03:05 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:34.651 15:03:05 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:12:34.651 15:03:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:34.651 15:03:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.651 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:12:34.651 15:03:05 -- nvmf/common.sh@469 -- # nvmfpid=76963 00:12:34.651 15:03:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:34.651 15:03:05 -- nvmf/common.sh@470 -- # waitforlisten 76963 00:12:34.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.651 15:03:05 -- common/autotest_common.sh@829 -- # '[' -z 76963 ']' 00:12:34.651 15:03:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.651 15:03:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:34.651 15:03:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.651 15:03:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:34.651 15:03:05 -- common/autotest_common.sh@10 -- # set +x 00:12:34.651 [2024-11-20 15:03:05.364455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:34.651 [2024-11-20 15:03:05.364560] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.909 [2024-11-20 15:03:05.503264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.909 [2024-11-20 15:03:05.544947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:34.909 [2024-11-20 15:03:05.545275] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.909 [2024-11-20 15:03:05.545406] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.909 [2024-11-20 15:03:05.545561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
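The key_long.txt value assembled in the trace above follows the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64>:. Below is a minimal standalone sketch of that derivation, not the test script itself; it assumes GNU gzip/base64 and relies on the fact that the last 8 bytes of a gzip stream are the CRC32 of the uncompressed input (little-endian) followed by its length, which is exactly what the tail -c8 | head -c4 pipeline traced above extracts.

# Sketch: build an interchange PSK from the configured key used in this run.
key=00112233445566778899aabbccddeeff0011223344556677
hash=02   # hash identifier taken from the trace above
# Append the key's CRC32 (pulled from the gzip trailer) to the key and base64
# the result. Keep the binary CRC bytes inside the pipe rather than a shell
# variable so NUL or newline bytes cannot be lost.
b64=$( { printf '%s' "$key"
         printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4; } | base64 -w0 )
printf 'NVMeTLSkey-1:%s:%s:' "$hash" "$b64" > key_long.txt
chmod 0600 key_long.txt   # the target rejects PSK files that are group/world readable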
00:12:34.910 [2024-11-20 15:03:05.545736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.844 15:03:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.844 15:03:06 -- common/autotest_common.sh@862 -- # return 0 00:12:35.844 15:03:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:35.844 15:03:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:35.844 15:03:06 -- common/autotest_common.sh@10 -- # set +x 00:12:35.844 15:03:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.844 15:03:06 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:35.844 15:03:06 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:35.844 15:03:06 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:36.102 [2024-11-20 15:03:06.653712] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.102 15:03:06 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:36.361 15:03:06 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:36.361 [2024-11-20 15:03:07.137824] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:36.361 [2024-11-20 15:03:07.138064] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.361 15:03:07 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:36.619 malloc0 00:12:36.619 15:03:07 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:37.185 15:03:07 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.444 15:03:07 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:37.444 15:03:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:37.444 15:03:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:37.444 15:03:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:37.444 15:03:07 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:37.444 15:03:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:37.444 15:03:07 -- target/tls.sh@28 -- # bdevperf_pid=77023 00:12:37.444 15:03:07 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:37.444 15:03:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:37.444 15:03:07 -- target/tls.sh@31 -- # waitforlisten 77023 /var/tmp/bdevperf.sock 00:12:37.444 15:03:07 -- common/autotest_common.sh@829 -- # '[' -z 77023 ']' 00:12:37.444 15:03:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.444 15:03:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.444 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock... 00:12:37.444 15:03:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.444 15:03:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.444 15:03:07 -- common/autotest_common.sh@10 -- # set +x 00:12:37.444 [2024-11-20 15:03:08.044435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:37.444 [2024-11-20 15:03:08.044816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77023 ] 00:12:37.444 [2024-11-20 15:03:08.183906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.444 [2024-11-20 15:03:08.219415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.379 15:03:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.379 15:03:09 -- common/autotest_common.sh@862 -- # return 0 00:12:38.379 15:03:09 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:38.638 [2024-11-20 15:03:09.298599] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:38.638 TLSTESTn1 00:12:38.638 15:03:09 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:38.896 Running I/O for 10 seconds... 00:12:48.956 00:12:48.956 Latency(us) 00:12:48.956 [2024-11-20T15:03:19.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.956 [2024-11-20T15:03:19.760Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:48.956 Verification LBA range: start 0x0 length 0x2000 00:12:48.956 TLSTESTn1 : 10.01 5263.83 20.56 0.00 0.00 24276.78 5362.04 39083.29 00:12:48.956 [2024-11-20T15:03:19.760Z] =================================================================================================================== 00:12:48.956 [2024-11-20T15:03:19.760Z] Total : 5263.83 20.56 0.00 0.00 24276.78 5362.04 39083.29 00:12:48.956 0 00:12:48.956 15:03:19 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:48.956 15:03:19 -- target/tls.sh@45 -- # killprocess 77023 00:12:48.956 15:03:19 -- common/autotest_common.sh@936 -- # '[' -z 77023 ']' 00:12:48.956 15:03:19 -- common/autotest_common.sh@940 -- # kill -0 77023 00:12:48.956 15:03:19 -- common/autotest_common.sh@941 -- # uname 00:12:48.956 15:03:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.956 15:03:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77023 00:12:48.956 killing process with pid 77023 00:12:48.956 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.956 00:12:48.956 Latency(us) 00:12:48.956 [2024-11-20T15:03:19.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.956 [2024-11-20T15:03:19.760Z] =================================================================================================================== 00:12:48.956 [2024-11-20T15:03:19.760Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.956 15:03:19 -- 
common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:48.956 15:03:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:48.956 15:03:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77023' 00:12:48.956 15:03:19 -- common/autotest_common.sh@955 -- # kill 77023 00:12:48.956 15:03:19 -- common/autotest_common.sh@960 -- # wait 77023 00:12:48.956 15:03:19 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.956 15:03:19 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.956 15:03:19 -- common/autotest_common.sh@650 -- # local es=0 00:12:48.956 15:03:19 -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.956 15:03:19 -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:12:48.956 15:03:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.956 15:03:19 -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:12:48.956 15:03:19 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:48.956 15:03:19 -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:48.956 15:03:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:48.956 15:03:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:48.956 15:03:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:48.956 15:03:19 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:48.956 15:03:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:48.956 15:03:19 -- target/tls.sh@28 -- # bdevperf_pid=77158 00:12:48.956 15:03:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:48.956 15:03:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:48.956 15:03:19 -- target/tls.sh@31 -- # waitforlisten 77158 /var/tmp/bdevperf.sock 00:12:48.956 15:03:19 -- common/autotest_common.sh@829 -- # '[' -z 77158 ']' 00:12:48.956 15:03:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:48.956 15:03:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:48.956 15:03:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:48.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:48.956 15:03:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:48.956 15:03:19 -- common/autotest_common.sh@10 -- # set +x 00:12:49.215 [2024-11-20 15:03:19.792513] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:12:49.215 [2024-11-20 15:03:19.793514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77158 ] 00:12:49.215 [2024-11-20 15:03:19.936881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.215 [2024-11-20 15:03:19.972692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.590 15:03:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.590 15:03:20 -- common/autotest_common.sh@862 -- # return 0 00:12:50.590 15:03:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:50.590 [2024-11-20 15:03:21.286983] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:50.590 [2024-11-20 15:03:21.287675] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:50.590 request: 00:12:50.590 { 00:12:50.590 "name": "TLSTEST", 00:12:50.590 "trtype": "tcp", 00:12:50.590 "traddr": "10.0.0.2", 00:12:50.590 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:50.590 "adrfam": "ipv4", 00:12:50.590 "trsvcid": "4420", 00:12:50.590 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.590 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:50.590 "method": "bdev_nvme_attach_controller", 00:12:50.590 "req_id": 1 00:12:50.590 } 00:12:50.590 Got JSON-RPC error response 00:12:50.590 response: 00:12:50.590 { 00:12:50.590 "code": -22, 00:12:50.590 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:50.590 } 00:12:50.590 15:03:21 -- target/tls.sh@36 -- # killprocess 77158 00:12:50.590 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 77158 ']' 00:12:50.590 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 77158 00:12:50.590 15:03:21 -- common/autotest_common.sh@941 -- # uname 00:12:50.590 15:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.590 15:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77158 00:12:50.590 killing process with pid 77158 00:12:50.590 Received shutdown signal, test time was about 10.000000 seconds 00:12:50.590 00:12:50.590 Latency(us) 00:12:50.590 [2024-11-20T15:03:21.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.590 [2024-11-20T15:03:21.394Z] =================================================================================================================== 00:12:50.590 [2024-11-20T15:03:21.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:50.590 15:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:50.590 15:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:50.590 15:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77158' 00:12:50.590 15:03:21 -- common/autotest_common.sh@955 -- # kill 77158 00:12:50.590 15:03:21 -- common/autotest_common.sh@960 -- # wait 77158 00:12:50.849 15:03:21 -- target/tls.sh@37 -- # return 1 00:12:50.849 15:03:21 -- common/autotest_common.sh@653 -- # es=1 00:12:50.849 15:03:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.849 15:03:21 -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.849 15:03:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.849 15:03:21 -- target/tls.sh@183 -- # killprocess 76963 00:12:50.849 15:03:21 -- common/autotest_common.sh@936 -- # '[' -z 76963 ']' 00:12:50.849 15:03:21 -- common/autotest_common.sh@940 -- # kill -0 76963 00:12:50.849 15:03:21 -- common/autotest_common.sh@941 -- # uname 00:12:50.849 15:03:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.849 15:03:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76963 00:12:50.849 killing process with pid 76963 00:12:50.849 15:03:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:50.849 15:03:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:50.849 15:03:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76963' 00:12:50.849 15:03:21 -- common/autotest_common.sh@955 -- # kill 76963 00:12:50.849 15:03:21 -- common/autotest_common.sh@960 -- # wait 76963 00:12:51.108 15:03:21 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:51.108 15:03:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:51.108 15:03:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.108 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.108 15:03:21 -- nvmf/common.sh@469 -- # nvmfpid=77190 00:12:51.108 15:03:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.108 15:03:21 -- nvmf/common.sh@470 -- # waitforlisten 77190 00:12:51.108 15:03:21 -- common/autotest_common.sh@829 -- # '[' -z 77190 ']' 00:12:51.108 15:03:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.108 15:03:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.108 15:03:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.108 15:03:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.108 15:03:21 -- common/autotest_common.sh@10 -- # set +x 00:12:51.108 [2024-11-20 15:03:21.706720] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:51.108 [2024-11-20 15:03:21.706821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.108 [2024-11-20 15:03:21.840570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.108 [2024-11-20 15:03:21.874835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:51.108 [2024-11-20 15:03:21.874981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.108 [2024-11-20 15:03:21.874995] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.108 [2024-11-20 15:03:21.875005] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
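For readability, here is a condensed sketch of what the traced tls.sh helpers drive over JSON-RPC: setup_nvmf_tgt on the target side and run_bdevperf on the initiator side. The paths, NQNs and the 10.0.0.2:4420 listener are the ones used in this run; the sketch assumes nvmf_tgt and bdevperf are already running on their sockets and that key_long.txt has mode 0600 (the 0666 variant is rejected elsewhere in this log with "Incorrect permissions for PSK file").

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt

# Target side (default /var/tmp/spdk.sock): TCP transport, one subsystem backed
# by a malloc bdev, a TLS-enabled listener (-k), and a host entry bound to the PSK.
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"

# Initiator side (bdevperf started with -z -r /var/tmp/bdevperf.sock as above):
# attach over TLS with the same PSK, then drive the timed verify workload.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk "$key"
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
    -s /var/tmp/bdevperf.sock perform_tests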
00:12:51.108 [2024-11-20 15:03:21.875037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.044 15:03:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.044 15:03:22 -- common/autotest_common.sh@862 -- # return 0 00:12:52.044 15:03:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:52.044 15:03:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.044 15:03:22 -- common/autotest_common.sh@10 -- # set +x 00:12:52.044 15:03:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.044 15:03:22 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:52.044 15:03:22 -- common/autotest_common.sh@650 -- # local es=0 00:12:52.044 15:03:22 -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:52.044 15:03:22 -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:12:52.044 15:03:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.044 15:03:22 -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:12:52.044 15:03:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:52.044 15:03:22 -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:52.044 15:03:22 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:52.044 15:03:22 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:52.300 [2024-11-20 15:03:22.991702] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.300 15:03:23 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:52.865 15:03:23 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:53.123 [2024-11-20 15:03:23.687879] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:53.123 [2024-11-20 15:03:23.688123] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.123 15:03:23 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:53.380 malloc0 00:12:53.380 15:03:23 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:53.638 15:03:24 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:53.897 [2024-11-20 15:03:24.490809] tcp.c:3551:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:53.897 [2024-11-20 15:03:24.491056] tcp.c:3620:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:53.897 [2024-11-20 15:03:24.491086] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:53.897 request: 00:12:53.897 { 00:12:53.897 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:53.897 "host": "nqn.2016-06.io.spdk:host1", 00:12:53.897 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:53.897 "method": "nvmf_subsystem_add_host", 00:12:53.897 
"req_id": 1 00:12:53.897 } 00:12:53.897 Got JSON-RPC error response 00:12:53.897 response: 00:12:53.897 { 00:12:53.897 "code": -32603, 00:12:53.897 "message": "Internal error" 00:12:53.897 } 00:12:53.897 15:03:24 -- common/autotest_common.sh@653 -- # es=1 00:12:53.897 15:03:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:53.897 15:03:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:53.897 15:03:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:53.897 15:03:24 -- target/tls.sh@189 -- # killprocess 77190 00:12:53.897 15:03:24 -- common/autotest_common.sh@936 -- # '[' -z 77190 ']' 00:12:53.897 15:03:24 -- common/autotest_common.sh@940 -- # kill -0 77190 00:12:53.897 15:03:24 -- common/autotest_common.sh@941 -- # uname 00:12:53.897 15:03:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:53.897 15:03:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77190 00:12:53.897 killing process with pid 77190 00:12:53.897 15:03:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:53.897 15:03:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:53.897 15:03:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77190' 00:12:53.897 15:03:24 -- common/autotest_common.sh@955 -- # kill 77190 00:12:53.897 15:03:24 -- common/autotest_common.sh@960 -- # wait 77190 00:12:53.897 15:03:24 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:53.897 15:03:24 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:53.897 15:03:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.897 15:03:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:53.897 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.155 15:03:24 -- nvmf/common.sh@469 -- # nvmfpid=77258 00:12:54.155 15:03:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:54.155 15:03:24 -- nvmf/common.sh@470 -- # waitforlisten 77258 00:12:54.155 15:03:24 -- common/autotest_common.sh@829 -- # '[' -z 77258 ']' 00:12:54.155 15:03:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.155 15:03:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:54.155 15:03:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.155 15:03:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:54.155 15:03:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.155 [2024-11-20 15:03:24.758545] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:54.155 [2024-11-20 15:03:24.758701] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.155 [2024-11-20 15:03:24.907954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.155 [2024-11-20 15:03:24.952335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:54.155 [2024-11-20 15:03:24.952590] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:54.155 [2024-11-20 15:03:24.952615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.155 [2024-11-20 15:03:24.952631] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.155 [2024-11-20 15:03:24.952705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.088 15:03:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:55.088 15:03:25 -- common/autotest_common.sh@862 -- # return 0 00:12:55.088 15:03:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:55.088 15:03:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:55.088 15:03:25 -- common/autotest_common.sh@10 -- # set +x 00:12:55.088 15:03:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.088 15:03:25 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:55.088 15:03:25 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:55.088 15:03:25 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:55.346 [2024-11-20 15:03:26.115364] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.346 15:03:26 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:55.983 15:03:26 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:55.983 [2024-11-20 15:03:26.695515] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:55.983 [2024-11-20 15:03:26.695785] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.983 15:03:26 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:56.258 malloc0 00:12:56.258 15:03:26 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:56.516 15:03:27 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:56.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:56.775 15:03:27 -- target/tls.sh@197 -- # bdevperf_pid=77313 00:12:56.775 15:03:27 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:56.775 15:03:27 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:56.775 15:03:27 -- target/tls.sh@200 -- # waitforlisten 77313 /var/tmp/bdevperf.sock 00:12:56.775 15:03:27 -- common/autotest_common.sh@829 -- # '[' -z 77313 ']' 00:12:56.775 15:03:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:56.775 15:03:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:56.775 15:03:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:56.775 15:03:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:56.775 15:03:27 -- common/autotest_common.sh@10 -- # set +x 00:12:56.775 [2024-11-20 15:03:27.527200] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:56.775 [2024-11-20 15:03:27.527322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77313 ] 00:12:57.033 [2024-11-20 15:03:27.673657] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.033 [2024-11-20 15:03:27.717501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.033 15:03:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.033 15:03:27 -- common/autotest_common.sh@862 -- # return 0 00:12:57.033 15:03:27 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:57.599 [2024-11-20 15:03:28.121736] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:57.599 TLSTESTn1 00:12:57.599 15:03:28 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:57.857 15:03:28 -- target/tls.sh@205 -- # tgtconf='{ 00:12:57.857 "subsystems": [ 00:12:57.857 { 00:12:57.857 "subsystem": "iobuf", 00:12:57.857 "config": [ 00:12:57.857 { 00:12:57.857 "method": "iobuf_set_options", 00:12:57.857 "params": { 00:12:57.857 "small_pool_count": 8192, 00:12:57.857 "large_pool_count": 1024, 00:12:57.857 "small_bufsize": 8192, 00:12:57.857 "large_bufsize": 135168 00:12:57.857 } 00:12:57.857 } 00:12:57.857 ] 00:12:57.857 }, 00:12:57.857 { 00:12:57.857 "subsystem": "sock", 00:12:57.857 "config": [ 00:12:57.857 { 00:12:57.857 "method": "sock_impl_set_options", 00:12:57.857 "params": { 00:12:57.857 "impl_name": "uring", 00:12:57.857 "recv_buf_size": 2097152, 00:12:57.857 "send_buf_size": 2097152, 00:12:57.857 "enable_recv_pipe": true, 00:12:57.857 "enable_quickack": false, 00:12:57.857 "enable_placement_id": 0, 00:12:57.857 "enable_zerocopy_send_server": false, 00:12:57.857 "enable_zerocopy_send_client": false, 00:12:57.857 "zerocopy_threshold": 0, 00:12:57.857 "tls_version": 0, 00:12:57.857 "enable_ktls": false 00:12:57.857 } 00:12:57.857 }, 00:12:57.857 { 00:12:57.857 "method": "sock_impl_set_options", 00:12:57.857 "params": { 00:12:57.857 "impl_name": "posix", 00:12:57.857 "recv_buf_size": 2097152, 00:12:57.857 "send_buf_size": 2097152, 00:12:57.857 "enable_recv_pipe": true, 00:12:57.857 "enable_quickack": false, 00:12:57.857 "enable_placement_id": 0, 00:12:57.857 "enable_zerocopy_send_server": true, 00:12:57.857 "enable_zerocopy_send_client": false, 00:12:57.857 "zerocopy_threshold": 0, 00:12:57.857 "tls_version": 0, 00:12:57.857 "enable_ktls": false 00:12:57.857 } 00:12:57.857 }, 00:12:57.857 { 00:12:57.858 "method": "sock_impl_set_options", 00:12:57.858 "params": { 00:12:57.858 "impl_name": "ssl", 00:12:57.858 "recv_buf_size": 4096, 00:12:57.858 "send_buf_size": 4096, 00:12:57.858 "enable_recv_pipe": true, 00:12:57.858 "enable_quickack": false, 00:12:57.858 "enable_placement_id": 0, 00:12:57.858 "enable_zerocopy_send_server": true, 00:12:57.858 "enable_zerocopy_send_client": false, 00:12:57.858 
"zerocopy_threshold": 0, 00:12:57.858 "tls_version": 0, 00:12:57.858 "enable_ktls": false 00:12:57.858 } 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "vmd", 00:12:57.858 "config": [] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "accel", 00:12:57.858 "config": [ 00:12:57.858 { 00:12:57.858 "method": "accel_set_options", 00:12:57.858 "params": { 00:12:57.858 "small_cache_size": 128, 00:12:57.858 "large_cache_size": 16, 00:12:57.858 "task_count": 2048, 00:12:57.858 "sequence_count": 2048, 00:12:57.858 "buf_count": 2048 00:12:57.858 } 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "bdev", 00:12:57.858 "config": [ 00:12:57.858 { 00:12:57.858 "method": "bdev_set_options", 00:12:57.858 "params": { 00:12:57.858 "bdev_io_pool_size": 65535, 00:12:57.858 "bdev_io_cache_size": 256, 00:12:57.858 "bdev_auto_examine": true, 00:12:57.858 "iobuf_small_cache_size": 128, 00:12:57.858 "iobuf_large_cache_size": 16 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_raid_set_options", 00:12:57.858 "params": { 00:12:57.858 "process_window_size_kb": 1024 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_iscsi_set_options", 00:12:57.858 "params": { 00:12:57.858 "timeout_sec": 30 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_nvme_set_options", 00:12:57.858 "params": { 00:12:57.858 "action_on_timeout": "none", 00:12:57.858 "timeout_us": 0, 00:12:57.858 "timeout_admin_us": 0, 00:12:57.858 "keep_alive_timeout_ms": 10000, 00:12:57.858 "transport_retry_count": 4, 00:12:57.858 "arbitration_burst": 0, 00:12:57.858 "low_priority_weight": 0, 00:12:57.858 "medium_priority_weight": 0, 00:12:57.858 "high_priority_weight": 0, 00:12:57.858 "nvme_adminq_poll_period_us": 10000, 00:12:57.858 "nvme_ioq_poll_period_us": 0, 00:12:57.858 "io_queue_requests": 0, 00:12:57.858 "delay_cmd_submit": true, 00:12:57.858 "bdev_retry_count": 3, 00:12:57.858 "transport_ack_timeout": 0, 00:12:57.858 "ctrlr_loss_timeout_sec": 0, 00:12:57.858 "reconnect_delay_sec": 0, 00:12:57.858 "fast_io_fail_timeout_sec": 0, 00:12:57.858 "generate_uuids": false, 00:12:57.858 "transport_tos": 0, 00:12:57.858 "io_path_stat": false, 00:12:57.858 "allow_accel_sequence": false 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_nvme_set_hotplug", 00:12:57.858 "params": { 00:12:57.858 "period_us": 100000, 00:12:57.858 "enable": false 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_malloc_create", 00:12:57.858 "params": { 00:12:57.858 "name": "malloc0", 00:12:57.858 "num_blocks": 8192, 00:12:57.858 "block_size": 4096, 00:12:57.858 "physical_block_size": 4096, 00:12:57.858 "uuid": "ea189378-842a-42b2-bc3d-1430b0447de6", 00:12:57.858 "optimal_io_boundary": 0 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "bdev_wait_for_examine" 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "nbd", 00:12:57.858 "config": [] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "scheduler", 00:12:57.858 "config": [ 00:12:57.858 { 00:12:57.858 "method": "framework_set_scheduler", 00:12:57.858 "params": { 00:12:57.858 "name": "static" 00:12:57.858 } 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "subsystem": "nvmf", 00:12:57.858 "config": [ 00:12:57.858 { 00:12:57.858 "method": "nvmf_set_config", 00:12:57.858 "params": { 00:12:57.858 "discovery_filter": "match_any", 00:12:57.858 
"admin_cmd_passthru": { 00:12:57.858 "identify_ctrlr": false 00:12:57.858 } 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_set_max_subsystems", 00:12:57.858 "params": { 00:12:57.858 "max_subsystems": 1024 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_set_crdt", 00:12:57.858 "params": { 00:12:57.858 "crdt1": 0, 00:12:57.858 "crdt2": 0, 00:12:57.858 "crdt3": 0 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_create_transport", 00:12:57.858 "params": { 00:12:57.858 "trtype": "TCP", 00:12:57.858 "max_queue_depth": 128, 00:12:57.858 "max_io_qpairs_per_ctrlr": 127, 00:12:57.858 "in_capsule_data_size": 4096, 00:12:57.858 "max_io_size": 131072, 00:12:57.858 "io_unit_size": 131072, 00:12:57.858 "max_aq_depth": 128, 00:12:57.858 "num_shared_buffers": 511, 00:12:57.858 "buf_cache_size": 4294967295, 00:12:57.858 "dif_insert_or_strip": false, 00:12:57.858 "zcopy": false, 00:12:57.858 "c2h_success": false, 00:12:57.858 "sock_priority": 0, 00:12:57.858 "abort_timeout_sec": 1 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_create_subsystem", 00:12:57.858 "params": { 00:12:57.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.858 "allow_any_host": false, 00:12:57.858 "serial_number": "SPDK00000000000001", 00:12:57.858 "model_number": "SPDK bdev Controller", 00:12:57.858 "max_namespaces": 10, 00:12:57.858 "min_cntlid": 1, 00:12:57.858 "max_cntlid": 65519, 00:12:57.858 "ana_reporting": false 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_subsystem_add_host", 00:12:57.858 "params": { 00:12:57.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.858 "host": "nqn.2016-06.io.spdk:host1", 00:12:57.858 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_subsystem_add_ns", 00:12:57.858 "params": { 00:12:57.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.858 "namespace": { 00:12:57.858 "nsid": 1, 00:12:57.858 "bdev_name": "malloc0", 00:12:57.858 "nguid": "EA189378842A42B2BC3D1430B0447DE6", 00:12:57.858 "uuid": "ea189378-842a-42b2-bc3d-1430b0447de6" 00:12:57.858 } 00:12:57.858 } 00:12:57.858 }, 00:12:57.858 { 00:12:57.858 "method": "nvmf_subsystem_add_listener", 00:12:57.858 "params": { 00:12:57.858 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:57.858 "listen_address": { 00:12:57.858 "trtype": "TCP", 00:12:57.858 "adrfam": "IPv4", 00:12:57.858 "traddr": "10.0.0.2", 00:12:57.858 "trsvcid": "4420" 00:12:57.858 }, 00:12:57.858 "secure_channel": true 00:12:57.858 } 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 } 00:12:57.858 ] 00:12:57.858 }' 00:12:57.858 15:03:28 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:58.425 15:03:28 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:58.425 "subsystems": [ 00:12:58.425 { 00:12:58.425 "subsystem": "iobuf", 00:12:58.425 "config": [ 00:12:58.425 { 00:12:58.425 "method": "iobuf_set_options", 00:12:58.425 "params": { 00:12:58.425 "small_pool_count": 8192, 00:12:58.425 "large_pool_count": 1024, 00:12:58.425 "small_bufsize": 8192, 00:12:58.425 "large_bufsize": 135168 00:12:58.425 } 00:12:58.425 } 00:12:58.425 ] 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "subsystem": "sock", 00:12:58.425 "config": [ 00:12:58.425 { 00:12:58.425 "method": "sock_impl_set_options", 00:12:58.425 "params": { 00:12:58.425 "impl_name": "uring", 00:12:58.425 "recv_buf_size": 2097152, 00:12:58.425 "send_buf_size": 2097152, 
00:12:58.425 "enable_recv_pipe": true, 00:12:58.425 "enable_quickack": false, 00:12:58.425 "enable_placement_id": 0, 00:12:58.425 "enable_zerocopy_send_server": false, 00:12:58.425 "enable_zerocopy_send_client": false, 00:12:58.425 "zerocopy_threshold": 0, 00:12:58.425 "tls_version": 0, 00:12:58.425 "enable_ktls": false 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "sock_impl_set_options", 00:12:58.425 "params": { 00:12:58.425 "impl_name": "posix", 00:12:58.425 "recv_buf_size": 2097152, 00:12:58.425 "send_buf_size": 2097152, 00:12:58.425 "enable_recv_pipe": true, 00:12:58.425 "enable_quickack": false, 00:12:58.425 "enable_placement_id": 0, 00:12:58.425 "enable_zerocopy_send_server": true, 00:12:58.425 "enable_zerocopy_send_client": false, 00:12:58.425 "zerocopy_threshold": 0, 00:12:58.425 "tls_version": 0, 00:12:58.425 "enable_ktls": false 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "sock_impl_set_options", 00:12:58.425 "params": { 00:12:58.425 "impl_name": "ssl", 00:12:58.425 "recv_buf_size": 4096, 00:12:58.425 "send_buf_size": 4096, 00:12:58.425 "enable_recv_pipe": true, 00:12:58.425 "enable_quickack": false, 00:12:58.425 "enable_placement_id": 0, 00:12:58.425 "enable_zerocopy_send_server": true, 00:12:58.425 "enable_zerocopy_send_client": false, 00:12:58.425 "zerocopy_threshold": 0, 00:12:58.425 "tls_version": 0, 00:12:58.425 "enable_ktls": false 00:12:58.425 } 00:12:58.425 } 00:12:58.425 ] 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "subsystem": "vmd", 00:12:58.425 "config": [] 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "subsystem": "accel", 00:12:58.425 "config": [ 00:12:58.425 { 00:12:58.425 "method": "accel_set_options", 00:12:58.425 "params": { 00:12:58.425 "small_cache_size": 128, 00:12:58.425 "large_cache_size": 16, 00:12:58.425 "task_count": 2048, 00:12:58.425 "sequence_count": 2048, 00:12:58.425 "buf_count": 2048 00:12:58.425 } 00:12:58.425 } 00:12:58.425 ] 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "subsystem": "bdev", 00:12:58.425 "config": [ 00:12:58.425 { 00:12:58.425 "method": "bdev_set_options", 00:12:58.425 "params": { 00:12:58.425 "bdev_io_pool_size": 65535, 00:12:58.425 "bdev_io_cache_size": 256, 00:12:58.425 "bdev_auto_examine": true, 00:12:58.425 "iobuf_small_cache_size": 128, 00:12:58.425 "iobuf_large_cache_size": 16 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "bdev_raid_set_options", 00:12:58.425 "params": { 00:12:58.425 "process_window_size_kb": 1024 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "bdev_iscsi_set_options", 00:12:58.425 "params": { 00:12:58.425 "timeout_sec": 30 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "bdev_nvme_set_options", 00:12:58.425 "params": { 00:12:58.425 "action_on_timeout": "none", 00:12:58.425 "timeout_us": 0, 00:12:58.425 "timeout_admin_us": 0, 00:12:58.425 "keep_alive_timeout_ms": 10000, 00:12:58.425 "transport_retry_count": 4, 00:12:58.425 "arbitration_burst": 0, 00:12:58.425 "low_priority_weight": 0, 00:12:58.425 "medium_priority_weight": 0, 00:12:58.425 "high_priority_weight": 0, 00:12:58.425 "nvme_adminq_poll_period_us": 10000, 00:12:58.425 "nvme_ioq_poll_period_us": 0, 00:12:58.425 "io_queue_requests": 512, 00:12:58.425 "delay_cmd_submit": true, 00:12:58.425 "bdev_retry_count": 3, 00:12:58.425 "transport_ack_timeout": 0, 00:12:58.425 "ctrlr_loss_timeout_sec": 0, 00:12:58.425 "reconnect_delay_sec": 0, 00:12:58.425 "fast_io_fail_timeout_sec": 0, 00:12:58.425 "generate_uuids": false, 00:12:58.425 
"transport_tos": 0, 00:12:58.425 "io_path_stat": false, 00:12:58.425 "allow_accel_sequence": false 00:12:58.425 } 00:12:58.425 }, 00:12:58.425 { 00:12:58.425 "method": "bdev_nvme_attach_controller", 00:12:58.425 "params": { 00:12:58.425 "name": "TLSTEST", 00:12:58.425 "trtype": "TCP", 00:12:58.425 "adrfam": "IPv4", 00:12:58.425 "traddr": "10.0.0.2", 00:12:58.425 "trsvcid": "4420", 00:12:58.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.425 "prchk_reftag": false, 00:12:58.425 "prchk_guard": false, 00:12:58.425 "ctrlr_loss_timeout_sec": 0, 00:12:58.426 "reconnect_delay_sec": 0, 00:12:58.426 "fast_io_fail_timeout_sec": 0, 00:12:58.426 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:58.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:58.426 "hdgst": false, 00:12:58.426 "ddgst": false 00:12:58.426 } 00:12:58.426 }, 00:12:58.426 { 00:12:58.426 "method": "bdev_nvme_set_hotplug", 00:12:58.426 "params": { 00:12:58.426 "period_us": 100000, 00:12:58.426 "enable": false 00:12:58.426 } 00:12:58.426 }, 00:12:58.426 { 00:12:58.426 "method": "bdev_wait_for_examine" 00:12:58.426 } 00:12:58.426 ] 00:12:58.426 }, 00:12:58.426 { 00:12:58.426 "subsystem": "nbd", 00:12:58.426 "config": [] 00:12:58.426 } 00:12:58.426 ] 00:12:58.426 }' 00:12:58.426 15:03:28 -- target/tls.sh@208 -- # killprocess 77313 00:12:58.426 15:03:28 -- common/autotest_common.sh@936 -- # '[' -z 77313 ']' 00:12:58.426 15:03:28 -- common/autotest_common.sh@940 -- # kill -0 77313 00:12:58.426 15:03:28 -- common/autotest_common.sh@941 -- # uname 00:12:58.426 15:03:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.426 15:03:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77313 00:12:58.426 killing process with pid 77313 00:12:58.426 Received shutdown signal, test time was about 10.000000 seconds 00:12:58.426 00:12:58.426 Latency(us) 00:12:58.426 [2024-11-20T15:03:29.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:58.426 [2024-11-20T15:03:29.230Z] =================================================================================================================== 00:12:58.426 [2024-11-20T15:03:29.230Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:58.426 15:03:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:58.426 15:03:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:58.426 15:03:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77313' 00:12:58.426 15:03:28 -- common/autotest_common.sh@955 -- # kill 77313 00:12:58.426 15:03:28 -- common/autotest_common.sh@960 -- # wait 77313 00:12:58.426 15:03:29 -- target/tls.sh@209 -- # killprocess 77258 00:12:58.426 15:03:29 -- common/autotest_common.sh@936 -- # '[' -z 77258 ']' 00:12:58.426 15:03:29 -- common/autotest_common.sh@940 -- # kill -0 77258 00:12:58.426 15:03:29 -- common/autotest_common.sh@941 -- # uname 00:12:58.426 15:03:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:58.426 15:03:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77258 00:12:58.426 killing process with pid 77258 00:12:58.426 15:03:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:58.426 15:03:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:58.426 15:03:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77258' 00:12:58.426 15:03:29 -- common/autotest_common.sh@955 -- # kill 77258 00:12:58.426 15:03:29 -- common/autotest_common.sh@960 -- # 
wait 77258 00:12:58.684 15:03:29 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:58.684 15:03:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:58.684 15:03:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:58.684 15:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.684 15:03:29 -- target/tls.sh@212 -- # echo '{ 00:12:58.684 "subsystems": [ 00:12:58.684 { 00:12:58.684 "subsystem": "iobuf", 00:12:58.684 "config": [ 00:12:58.684 { 00:12:58.684 "method": "iobuf_set_options", 00:12:58.684 "params": { 00:12:58.684 "small_pool_count": 8192, 00:12:58.684 "large_pool_count": 1024, 00:12:58.684 "small_bufsize": 8192, 00:12:58.684 "large_bufsize": 135168 00:12:58.684 } 00:12:58.684 } 00:12:58.684 ] 00:12:58.684 }, 00:12:58.684 { 00:12:58.684 "subsystem": "sock", 00:12:58.684 "config": [ 00:12:58.684 { 00:12:58.684 "method": "sock_impl_set_options", 00:12:58.684 "params": { 00:12:58.684 "impl_name": "uring", 00:12:58.684 "recv_buf_size": 2097152, 00:12:58.685 "send_buf_size": 2097152, 00:12:58.685 "enable_recv_pipe": true, 00:12:58.685 "enable_quickack": false, 00:12:58.685 "enable_placement_id": 0, 00:12:58.685 "enable_zerocopy_send_server": false, 00:12:58.685 "enable_zerocopy_send_client": false, 00:12:58.685 "zerocopy_threshold": 0, 00:12:58.685 "tls_version": 0, 00:12:58.685 "enable_ktls": false 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "sock_impl_set_options", 00:12:58.685 "params": { 00:12:58.685 "impl_name": "posix", 00:12:58.685 "recv_buf_size": 2097152, 00:12:58.685 "send_buf_size": 2097152, 00:12:58.685 "enable_recv_pipe": true, 00:12:58.685 "enable_quickack": false, 00:12:58.685 "enable_placement_id": 0, 00:12:58.685 "enable_zerocopy_send_server": true, 00:12:58.685 "enable_zerocopy_send_client": false, 00:12:58.685 "zerocopy_threshold": 0, 00:12:58.685 "tls_version": 0, 00:12:58.685 "enable_ktls": false 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "sock_impl_set_options", 00:12:58.685 "params": { 00:12:58.685 "impl_name": "ssl", 00:12:58.685 "recv_buf_size": 4096, 00:12:58.685 "send_buf_size": 4096, 00:12:58.685 "enable_recv_pipe": true, 00:12:58.685 "enable_quickack": false, 00:12:58.685 "enable_placement_id": 0, 00:12:58.685 "enable_zerocopy_send_server": true, 00:12:58.685 "enable_zerocopy_send_client": false, 00:12:58.685 "zerocopy_threshold": 0, 00:12:58.685 "tls_version": 0, 00:12:58.685 "enable_ktls": false 00:12:58.685 } 00:12:58.685 } 00:12:58.685 ] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "vmd", 00:12:58.685 "config": [] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "accel", 00:12:58.685 "config": [ 00:12:58.685 { 00:12:58.685 "method": "accel_set_options", 00:12:58.685 "params": { 00:12:58.685 "small_cache_size": 128, 00:12:58.685 "large_cache_size": 16, 00:12:58.685 "task_count": 2048, 00:12:58.685 "sequence_count": 2048, 00:12:58.685 "buf_count": 2048 00:12:58.685 } 00:12:58.685 } 00:12:58.685 ] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "bdev", 00:12:58.685 "config": [ 00:12:58.685 { 00:12:58.685 "method": "bdev_set_options", 00:12:58.685 "params": { 00:12:58.685 "bdev_io_pool_size": 65535, 00:12:58.685 "bdev_io_cache_size": 256, 00:12:58.685 "bdev_auto_examine": true, 00:12:58.685 "iobuf_small_cache_size": 128, 00:12:58.685 "iobuf_large_cache_size": 16 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_raid_set_options", 00:12:58.685 "params": { 00:12:58.685 "process_window_size_kb": 1024 00:12:58.685 } 
00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_iscsi_set_options", 00:12:58.685 "params": { 00:12:58.685 "timeout_sec": 30 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_nvme_set_options", 00:12:58.685 "params": { 00:12:58.685 "action_on_timeout": "none", 00:12:58.685 "timeout_us": 0, 00:12:58.685 "timeout_admin_us": 0, 00:12:58.685 "keep_alive_timeout_ms": 10000, 00:12:58.685 "transport_retry_count": 4, 00:12:58.685 "arbitration_burst": 0, 00:12:58.685 "low_priority_weight": 0, 00:12:58.685 "medium_priority_weight": 0, 00:12:58.685 "high_priority_weight": 0, 00:12:58.685 "nvme_adminq_poll_period_us": 10000, 00:12:58.685 "nvme_ioq_poll_period_us": 0, 00:12:58.685 "io_queue_requests": 0, 00:12:58.685 "delay_cmd_submit": true, 00:12:58.685 "bdev_retry_count": 3, 00:12:58.685 "transport_ack_timeout": 0, 00:12:58.685 "ctrlr_loss_timeout_sec": 0, 00:12:58.685 "reconnect_delay_sec": 0, 00:12:58.685 "fast_io_fail_timeout_sec": 0, 00:12:58.685 "generate_uuids": false, 00:12:58.685 "transport_tos": 0, 00:12:58.685 "io_path_stat": false, 00:12:58.685 "allow_accel_sequence": false 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_nvme_set_hotplug", 00:12:58.685 "params": { 00:12:58.685 "period_us": 100000, 00:12:58.685 "enable": false 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_malloc_create", 00:12:58.685 "params": { 00:12:58.685 "name": "malloc0", 00:12:58.685 "num_blocks": 8192, 00:12:58.685 "block_size": 4096, 00:12:58.685 "physical_block_size": 4096, 00:12:58.685 "uuid": "ea189378-842a-42b2-bc3d-1430b0447de6", 00:12:58.685 "optimal_io_boundary": 0 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "bdev_wait_for_examine" 00:12:58.685 } 00:12:58.685 ] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "nbd", 00:12:58.685 "config": [] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "scheduler", 00:12:58.685 "config": [ 00:12:58.685 { 00:12:58.685 "method": "framework_set_scheduler", 00:12:58.685 "params": { 00:12:58.685 "name": "static" 00:12:58.685 } 00:12:58.685 } 00:12:58.685 ] 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "subsystem": "nvmf", 00:12:58.685 "config": [ 00:12:58.685 { 00:12:58.685 "method": "nvmf_set_config", 00:12:58.685 "params": { 00:12:58.685 "discovery_filter": "match_any", 00:12:58.685 "admin_cmd_passthru": { 00:12:58.685 "identify_ctrlr": false 00:12:58.685 } 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_set_max_subsystems", 00:12:58.685 "params": { 00:12:58.685 "max_subsystems": 1024 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_set_crdt", 00:12:58.685 "params": { 00:12:58.685 "crdt1": 0, 00:12:58.685 "crdt2": 0, 00:12:58.685 "crdt3": 0 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_create_transport", 00:12:58.685 "params": { 00:12:58.685 "trtype": "TCP", 00:12:58.685 "max_queue_depth": 128, 00:12:58.685 "max_io_qpairs_per_ctrlr": 127, 00:12:58.685 "in_capsule_data_size": 4096, 00:12:58.685 "max_io_size": 131072, 00:12:58.685 "io_unit_size": 131072, 00:12:58.685 "max_aq_depth": 128, 00:12:58.685 "num_shared_buffers": 511, 00:12:58.685 "buf_cache_size": 4294967295, 00:12:58.685 "dif_insert_or_strip": false, 00:12:58.685 "zcopy": false, 00:12:58.685 "c2h_success": false, 00:12:58.685 "sock_priority": 0, 00:12:58.685 "abort_timeout_sec": 1 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_create_subsystem", 00:12:58.685 "params": { 
00:12:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.685 "allow_any_host": false, 00:12:58.685 "serial_number": "SPDK00000000000001", 00:12:58.685 "model_number": "SPDK bdev Controller", 00:12:58.685 "max_namespaces": 10, 00:12:58.685 "min_cntlid": 1, 00:12:58.685 "max_cntlid": 65519, 00:12:58.685 "ana_reporting": false 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_subsystem_add_host", 00:12:58.685 "params": { 00:12:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.685 "host": "nqn.2016-06.io.spdk:host1", 00:12:58.685 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_subsystem_add_ns", 00:12:58.685 "params": { 00:12:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.685 "namespace": { 00:12:58.685 "nsid": 1, 00:12:58.685 "bdev_name": "malloc0", 00:12:58.685 "nguid": "EA189378842A42B2BC3D1430B0447DE6", 00:12:58.685 "uuid": "ea189378-842a-42b2-bc3d-1430b0447de6" 00:12:58.685 } 00:12:58.685 } 00:12:58.685 }, 00:12:58.685 { 00:12:58.685 "method": "nvmf_subsystem_add_listener", 00:12:58.685 "params": { 00:12:58.685 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:58.685 "listen_address": { 00:12:58.686 "trtype": "TCP", 00:12:58.686 "adrfam": "IPv4", 00:12:58.686 "traddr": "10.0.0.2", 00:12:58.686 "trsvcid": "4420" 00:12:58.686 }, 00:12:58.686 "secure_channel": true 00:12:58.686 } 00:12:58.686 } 00:12:58.686 ] 00:12:58.686 } 00:12:58.686 ] 00:12:58.686 }' 00:12:58.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.686 15:03:29 -- nvmf/common.sh@469 -- # nvmfpid=77354 00:12:58.686 15:03:29 -- nvmf/common.sh@470 -- # waitforlisten 77354 00:12:58.686 15:03:29 -- common/autotest_common.sh@829 -- # '[' -z 77354 ']' 00:12:58.686 15:03:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.686 15:03:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:58.686 15:03:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.686 15:03:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:58.686 15:03:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:58.686 15:03:29 -- common/autotest_common.sh@10 -- # set +x 00:12:58.686 [2024-11-20 15:03:29.362253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:58.686 [2024-11-20 15:03:29.362799] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.944 [2024-11-20 15:03:29.499135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.944 [2024-11-20 15:03:29.533854] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:58.944 [2024-11-20 15:03:29.534221] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.944 [2024-11-20 15:03:29.534245] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.944 [2024-11-20 15:03:29.534254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:58.944 [2024-11-20 15:03:29.534291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.944 [2024-11-20 15:03:29.718945] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.202 [2024-11-20 15:03:29.750977] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:59.202 [2024-11-20 15:03:29.751304] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.770 15:03:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:59.770 15:03:30 -- common/autotest_common.sh@862 -- # return 0 00:12:59.770 15:03:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:59.770 15:03:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:59.770 15:03:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:59.770 15:03:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.770 15:03:30 -- target/tls.sh@216 -- # bdevperf_pid=77386 00:12:59.770 15:03:30 -- target/tls.sh@217 -- # waitforlisten 77386 /var/tmp/bdevperf.sock 00:12:59.770 15:03:30 -- common/autotest_common.sh@829 -- # '[' -z 77386 ']' 00:12:59.770 15:03:30 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:59.770 15:03:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:59.770 15:03:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.770 15:03:30 -- target/tls.sh@213 -- # echo '{ 00:12:59.770 "subsystems": [ 00:12:59.770 { 00:12:59.770 "subsystem": "iobuf", 00:12:59.770 "config": [ 00:12:59.770 { 00:12:59.770 "method": "iobuf_set_options", 00:12:59.770 "params": { 00:12:59.770 "small_pool_count": 8192, 00:12:59.770 "large_pool_count": 1024, 00:12:59.770 "small_bufsize": 8192, 00:12:59.770 "large_bufsize": 135168 00:12:59.770 } 00:12:59.770 } 00:12:59.771 ] 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "subsystem": "sock", 00:12:59.771 "config": [ 00:12:59.771 { 00:12:59.771 "method": "sock_impl_set_options", 00:12:59.771 "params": { 00:12:59.771 "impl_name": "uring", 00:12:59.771 "recv_buf_size": 2097152, 00:12:59.771 "send_buf_size": 2097152, 00:12:59.771 "enable_recv_pipe": true, 00:12:59.771 "enable_quickack": false, 00:12:59.771 "enable_placement_id": 0, 00:12:59.771 "enable_zerocopy_send_server": false, 00:12:59.771 "enable_zerocopy_send_client": false, 00:12:59.771 "zerocopy_threshold": 0, 00:12:59.771 "tls_version": 0, 00:12:59.771 "enable_ktls": false 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "sock_impl_set_options", 00:12:59.771 "params": { 00:12:59.771 "impl_name": "posix", 00:12:59.771 "recv_buf_size": 2097152, 00:12:59.771 "send_buf_size": 2097152, 00:12:59.771 "enable_recv_pipe": true, 00:12:59.771 "enable_quickack": false, 00:12:59.771 "enable_placement_id": 0, 00:12:59.771 "enable_zerocopy_send_server": true, 00:12:59.771 "enable_zerocopy_send_client": false, 00:12:59.771 "zerocopy_threshold": 0, 00:12:59.771 "tls_version": 0, 00:12:59.771 "enable_ktls": false 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "sock_impl_set_options", 00:12:59.771 "params": { 00:12:59.771 "impl_name": "ssl", 00:12:59.771 "recv_buf_size": 4096, 00:12:59.771 "send_buf_size": 4096, 00:12:59.771 "enable_recv_pipe": true, 00:12:59.771 
"enable_quickack": false, 00:12:59.771 "enable_placement_id": 0, 00:12:59.771 "enable_zerocopy_send_server": true, 00:12:59.771 "enable_zerocopy_send_client": false, 00:12:59.771 "zerocopy_threshold": 0, 00:12:59.771 "tls_version": 0, 00:12:59.771 "enable_ktls": false 00:12:59.771 } 00:12:59.771 } 00:12:59.771 ] 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "subsystem": "vmd", 00:12:59.771 "config": [] 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "subsystem": "accel", 00:12:59.771 "config": [ 00:12:59.771 { 00:12:59.771 "method": "accel_set_options", 00:12:59.771 "params": { 00:12:59.771 "small_cache_size": 128, 00:12:59.771 "large_cache_size": 16, 00:12:59.771 "task_count": 2048, 00:12:59.771 "sequence_count": 2048, 00:12:59.771 "buf_count": 2048 00:12:59.771 } 00:12:59.771 } 00:12:59.771 ] 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "subsystem": "bdev", 00:12:59.771 "config": [ 00:12:59.771 { 00:12:59.771 "method": "bdev_set_options", 00:12:59.771 "params": { 00:12:59.771 "bdev_io_pool_size": 65535, 00:12:59.771 "bdev_io_cache_size": 256, 00:12:59.771 "bdev_auto_examine": true, 00:12:59.771 "iobuf_small_cache_size": 128, 00:12:59.771 "iobuf_large_cache_size": 16 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_raid_set_options", 00:12:59.771 "params": { 00:12:59.771 "process_window_size_kb": 1024 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_iscsi_set_options", 00:12:59.771 "params": { 00:12:59.771 "timeout_sec": 30 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_nvme_set_options", 00:12:59.771 "params": { 00:12:59.771 "action_on_timeout": "none", 00:12:59.771 "timeout_us": 0, 00:12:59.771 "timeout_admin_us": 0, 00:12:59.771 "keep_alive_timeout_ms": 10000, 00:12:59.771 "transport_retry_count": 4, 00:12:59.771 "arbitration_burst": 0, 00:12:59.771 "low_priority_weight": 0, 00:12:59.771 "medium_priority_weight": 0, 00:12:59.771 "high_priority_weight": 0, 00:12:59.771 "nvme_adminq_poll_period_us": 10000, 00:12:59.771 "nvme_ioq_poll_period_us": 0, 00:12:59.771 "io_queue_requests": 512, 00:12:59.771 "delay_cmd_submit": true, 00:12:59.771 "bdev_retry_count": 3, 00:12:59.771 "transport_ack_timeout": 0, 00:12:59.771 "ctrlr_loss_timeout_sec": 0, 00:12:59.771 "reconnect_delay_sec": 0, 00:12:59.771 "fast_io_fail_timeout_sec": 0, 00:12:59.771 "generate_uuids": false, 00:12:59.771 "transport_tos": 0, 00:12:59.771 "io_path_stat": false, 00:12:59.771 "allow_accel_sequence": false 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_nvme_attach_controller", 00:12:59.771 "params": { 00:12:59.771 "name": "TLSTEST", 00:12:59.771 "trtype": "TCP", 00:12:59.771 "adrfam": "IPv4", 00:12:59.771 "traddr": "10.0.0.2", 00:12:59.771 "trsvcid": "4420", 00:12:59.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:59.771 "prchk_reftag": false, 00:12:59.771 "prchk_guard": false, 00:12:59.771 "ctrlr_loss_timeout_sec": 0, 00:12:59.771 "reconnect_delay_sec": 0, 00:12:59.771 "fast_io_fail_timeout_sec": 0, 00:12:59.771 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:59.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:59.771 "hdgst": false, 00:12:59.771 "ddgst": false 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_nvme_set_hotplug", 00:12:59.771 "params": { 00:12:59.771 "period_us": 100000, 00:12:59.771 "enable": false 00:12:59.771 } 00:12:59.771 }, 00:12:59.771 { 00:12:59.771 "method": "bdev_wait_for_examine" 00:12:59.771 } 00:12:59.771 ] 00:12:59.771 }, 00:12:59.771 { 
00:12:59.771 "subsystem": "nbd", 00:12:59.771 "config": [] 00:12:59.771 } 00:12:59.771 ] 00:12:59.771 }' 00:12:59.771 15:03:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:59.771 15:03:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.771 15:03:30 -- common/autotest_common.sh@10 -- # set +x 00:12:59.771 [2024-11-20 15:03:30.521615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:12:59.771 [2024-11-20 15:03:30.521984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77386 ] 00:13:00.030 [2024-11-20 15:03:30.667606] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.030 [2024-11-20 15:03:30.707014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.030 [2024-11-20 15:03:30.832666] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:00.965 15:03:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.965 15:03:31 -- common/autotest_common.sh@862 -- # return 0 00:13:00.965 15:03:31 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:00.965 Running I/O for 10 seconds... 00:13:10.939 00:13:10.939 Latency(us) 00:13:10.939 [2024-11-20T15:03:41.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.939 [2024-11-20T15:03:41.743Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:10.939 Verification LBA range: start 0x0 length 0x2000 00:13:10.939 TLSTESTn1 : 10.02 5162.43 20.17 0.00 0.00 24753.05 4974.78 31933.91 00:13:10.939 [2024-11-20T15:03:41.743Z] =================================================================================================================== 00:13:10.939 [2024-11-20T15:03:41.743Z] Total : 5162.43 20.17 0.00 0.00 24753.05 4974.78 31933.91 00:13:10.939 0 00:13:10.939 15:03:41 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:10.939 15:03:41 -- target/tls.sh@223 -- # killprocess 77386 00:13:10.939 15:03:41 -- common/autotest_common.sh@936 -- # '[' -z 77386 ']' 00:13:10.939 15:03:41 -- common/autotest_common.sh@940 -- # kill -0 77386 00:13:10.939 15:03:41 -- common/autotest_common.sh@941 -- # uname 00:13:10.939 15:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.939 15:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77386 00:13:10.939 killing process with pid 77386 00:13:10.939 Received shutdown signal, test time was about 10.000000 seconds 00:13:10.939 00:13:10.939 Latency(us) 00:13:10.939 [2024-11-20T15:03:41.743Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.939 [2024-11-20T15:03:41.743Z] =================================================================================================================== 00:13:10.939 [2024-11-20T15:03:41.743Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:10.939 15:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:10.939 15:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:10.939 15:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77386' 00:13:10.939 15:03:41 -- 
common/autotest_common.sh@955 -- # kill 77386 00:13:10.939 15:03:41 -- common/autotest_common.sh@960 -- # wait 77386 00:13:11.198 15:03:41 -- target/tls.sh@224 -- # killprocess 77354 00:13:11.198 15:03:41 -- common/autotest_common.sh@936 -- # '[' -z 77354 ']' 00:13:11.198 15:03:41 -- common/autotest_common.sh@940 -- # kill -0 77354 00:13:11.198 15:03:41 -- common/autotest_common.sh@941 -- # uname 00:13:11.198 15:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:11.198 15:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77354 00:13:11.198 killing process with pid 77354 00:13:11.198 15:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:11.198 15:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:11.198 15:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77354' 00:13:11.198 15:03:41 -- common/autotest_common.sh@955 -- # kill 77354 00:13:11.198 15:03:41 -- common/autotest_common.sh@960 -- # wait 77354 00:13:11.459 15:03:42 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:13:11.459 15:03:42 -- target/tls.sh@227 -- # cleanup 00:13:11.459 15:03:42 -- target/tls.sh@15 -- # process_shm --id 0 00:13:11.459 15:03:42 -- common/autotest_common.sh@806 -- # type=--id 00:13:11.459 15:03:42 -- common/autotest_common.sh@807 -- # id=0 00:13:11.459 15:03:42 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:11.459 15:03:42 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:11.459 15:03:42 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:11.459 15:03:42 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:11.459 15:03:42 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:11.459 15:03:42 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:11.459 nvmf_trace.0 00:13:11.459 Process with pid 77386 is not found 00:13:11.459 15:03:42 -- common/autotest_common.sh@821 -- # return 0 00:13:11.459 15:03:42 -- target/tls.sh@16 -- # killprocess 77386 00:13:11.459 15:03:42 -- common/autotest_common.sh@936 -- # '[' -z 77386 ']' 00:13:11.459 15:03:42 -- common/autotest_common.sh@940 -- # kill -0 77386 00:13:11.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77386) - No such process 00:13:11.459 15:03:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77386 is not found' 00:13:11.459 15:03:42 -- target/tls.sh@17 -- # nvmftestfini 00:13:11.459 15:03:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:11.459 15:03:42 -- nvmf/common.sh@116 -- # sync 00:13:11.459 15:03:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:11.459 15:03:42 -- nvmf/common.sh@119 -- # set +e 00:13:11.459 15:03:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:11.459 15:03:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:11.459 rmmod nvme_tcp 00:13:11.459 rmmod nvme_fabrics 00:13:11.459 rmmod nvme_keyring 00:13:11.459 15:03:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:11.459 15:03:42 -- nvmf/common.sh@123 -- # set -e 00:13:11.459 15:03:42 -- nvmf/common.sh@124 -- # return 0 00:13:11.459 15:03:42 -- nvmf/common.sh@477 -- # '[' -n 77354 ']' 00:13:11.459 15:03:42 -- nvmf/common.sh@478 -- # killprocess 77354 00:13:11.459 15:03:42 -- common/autotest_common.sh@936 -- # '[' -z 77354 ']' 00:13:11.459 15:03:42 -- common/autotest_common.sh@940 -- # kill -0 77354 00:13:11.459 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77354) - No such process 00:13:11.459 15:03:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77354 is not found' 00:13:11.459 Process with pid 77354 is not found 00:13:11.459 15:03:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:11.459 15:03:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:11.459 15:03:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:11.459 15:03:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:11.459 15:03:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:11.459 15:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.459 15:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.459 15:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.725 15:03:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:11.725 15:03:42 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:13:11.725 00:13:11.725 real 1m10.851s 00:13:11.725 user 1m51.783s 00:13:11.725 sys 0m23.440s 00:13:11.725 15:03:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:11.725 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:13:11.725 ************************************ 00:13:11.725 END TEST nvmf_tls 00:13:11.725 ************************************ 00:13:11.725 15:03:42 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:11.725 15:03:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:11.725 15:03:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.725 15:03:42 -- common/autotest_common.sh@10 -- # set +x 00:13:11.725 ************************************ 00:13:11.725 START TEST nvmf_fips 00:13:11.725 ************************************ 00:13:11.725 15:03:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:13:11.725 * Looking for test storage... 
00:13:11.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:13:11.725 15:03:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:11.725 15:03:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:11.725 15:03:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:11.725 15:03:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:11.725 15:03:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:11.725 15:03:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:11.725 15:03:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:11.725 15:03:42 -- scripts/common.sh@335 -- # IFS=.-: 00:13:11.725 15:03:42 -- scripts/common.sh@335 -- # read -ra ver1 00:13:11.725 15:03:42 -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.725 15:03:42 -- scripts/common.sh@336 -- # read -ra ver2 00:13:11.725 15:03:42 -- scripts/common.sh@337 -- # local 'op=<' 00:13:11.725 15:03:42 -- scripts/common.sh@339 -- # ver1_l=2 00:13:11.725 15:03:42 -- scripts/common.sh@340 -- # ver2_l=1 00:13:11.725 15:03:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:11.725 15:03:42 -- scripts/common.sh@343 -- # case "$op" in 00:13:11.725 15:03:42 -- scripts/common.sh@344 -- # : 1 00:13:11.725 15:03:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:11.725 15:03:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.725 15:03:42 -- scripts/common.sh@364 -- # decimal 1 00:13:11.725 15:03:42 -- scripts/common.sh@352 -- # local d=1 00:13:11.725 15:03:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.725 15:03:42 -- scripts/common.sh@354 -- # echo 1 00:13:11.725 15:03:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:11.725 15:03:42 -- scripts/common.sh@365 -- # decimal 2 00:13:11.725 15:03:42 -- scripts/common.sh@352 -- # local d=2 00:13:11.725 15:03:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.725 15:03:42 -- scripts/common.sh@354 -- # echo 2 00:13:11.725 15:03:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:11.725 15:03:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:11.725 15:03:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:11.725 15:03:42 -- scripts/common.sh@367 -- # return 0 00:13:11.725 15:03:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.725 15:03:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.725 --rc genhtml_branch_coverage=1 00:13:11.725 --rc genhtml_function_coverage=1 00:13:11.725 --rc genhtml_legend=1 00:13:11.725 --rc geninfo_all_blocks=1 00:13:11.725 --rc geninfo_unexecuted_blocks=1 00:13:11.725 00:13:11.725 ' 00:13:11.725 15:03:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.725 --rc genhtml_branch_coverage=1 00:13:11.725 --rc genhtml_function_coverage=1 00:13:11.725 --rc genhtml_legend=1 00:13:11.725 --rc geninfo_all_blocks=1 00:13:11.725 --rc geninfo_unexecuted_blocks=1 00:13:11.725 00:13:11.725 ' 00:13:11.725 15:03:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.725 --rc genhtml_branch_coverage=1 00:13:11.725 --rc genhtml_function_coverage=1 00:13:11.725 --rc genhtml_legend=1 00:13:11.725 --rc geninfo_all_blocks=1 00:13:11.725 --rc geninfo_unexecuted_blocks=1 00:13:11.725 00:13:11.725 ' 00:13:11.725 
15:03:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:11.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.725 --rc genhtml_branch_coverage=1 00:13:11.725 --rc genhtml_function_coverage=1 00:13:11.725 --rc genhtml_legend=1 00:13:11.725 --rc geninfo_all_blocks=1 00:13:11.725 --rc geninfo_unexecuted_blocks=1 00:13:11.726 00:13:11.726 ' 00:13:11.726 15:03:42 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:11.726 15:03:42 -- nvmf/common.sh@7 -- # uname -s 00:13:11.726 15:03:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.726 15:03:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.726 15:03:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.726 15:03:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.726 15:03:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.726 15:03:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.726 15:03:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.726 15:03:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.726 15:03:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.726 15:03:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.726 15:03:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:13:11.726 15:03:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:13:11.726 15:03:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.726 15:03:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.726 15:03:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:11.726 15:03:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:11.726 15:03:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.726 15:03:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.726 15:03:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.726 15:03:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.726 15:03:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.726 15:03:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.726 15:03:42 -- paths/export.sh@5 -- # export PATH 00:13:11.726 15:03:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.726 15:03:42 -- nvmf/common.sh@46 -- # : 0 00:13:11.726 15:03:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:11.726 15:03:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:11.726 15:03:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:11.986 15:03:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.986 15:03:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.986 15:03:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:11.986 15:03:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:11.986 15:03:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:11.986 15:03:42 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.986 15:03:42 -- fips/fips.sh@89 -- # check_openssl_version 00:13:11.986 15:03:42 -- fips/fips.sh@83 -- # local target=3.0.0 00:13:11.986 15:03:42 -- fips/fips.sh@85 -- # openssl version 00:13:11.986 15:03:42 -- fips/fips.sh@85 -- # awk '{print $2}' 00:13:11.986 15:03:42 -- fips/fips.sh@85 -- # ge 3.1.1 3.0.0 00:13:11.986 15:03:42 -- scripts/common.sh@375 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:13:11.986 15:03:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:11.986 15:03:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:11.986 15:03:42 -- scripts/common.sh@335 -- # IFS=.-: 00:13:11.986 15:03:42 -- scripts/common.sh@335 -- # read -ra ver1 00:13:11.986 15:03:42 -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.986 15:03:42 -- scripts/common.sh@336 -- # read -ra ver2 00:13:11.986 15:03:42 -- scripts/common.sh@337 -- # local 'op=>=' 00:13:11.986 15:03:42 -- scripts/common.sh@339 -- # ver1_l=3 00:13:11.986 15:03:42 -- scripts/common.sh@340 -- # ver2_l=3 00:13:11.986 15:03:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:11.986 15:03:42 -- scripts/common.sh@343 -- # case "$op" in 00:13:11.986 15:03:42 -- scripts/common.sh@347 -- # : 1 00:13:11.986 15:03:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:11.986 15:03:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.986 15:03:42 -- scripts/common.sh@364 -- # decimal 3 00:13:11.986 15:03:42 -- scripts/common.sh@352 -- # local d=3 00:13:11.986 15:03:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:11.986 15:03:42 -- scripts/common.sh@354 -- # echo 3 00:13:11.986 15:03:42 -- scripts/common.sh@364 -- # ver1[v]=3 00:13:11.986 15:03:42 -- scripts/common.sh@365 -- # decimal 3 00:13:11.986 15:03:42 -- scripts/common.sh@352 -- # local d=3 00:13:11.986 15:03:42 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:13:11.986 15:03:42 -- scripts/common.sh@354 -- # echo 3 00:13:11.986 15:03:42 -- scripts/common.sh@365 -- # ver2[v]=3 00:13:11.986 15:03:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:11.986 15:03:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:11.986 15:03:42 -- scripts/common.sh@363 -- # (( v++ )) 00:13:11.986 15:03:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.986 15:03:42 -- scripts/common.sh@364 -- # decimal 1 00:13:11.986 15:03:42 -- scripts/common.sh@352 -- # local d=1 00:13:11.986 15:03:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.986 15:03:42 -- scripts/common.sh@354 -- # echo 1 00:13:11.986 15:03:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:11.986 15:03:42 -- scripts/common.sh@365 -- # decimal 0 00:13:11.986 15:03:42 -- scripts/common.sh@352 -- # local d=0 00:13:11.986 15:03:42 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:13:11.986 15:03:42 -- scripts/common.sh@354 -- # echo 0 00:13:11.986 15:03:42 -- scripts/common.sh@365 -- # ver2[v]=0 00:13:11.986 15:03:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:11.986 15:03:42 -- scripts/common.sh@366 -- # return 0 00:13:11.986 15:03:42 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:13:11.986 15:03:42 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:13:11.986 15:03:42 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:13:11.986 15:03:42 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:13:11.986 15:03:42 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:13:11.986 15:03:42 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:13:11.986 15:03:42 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:13:11.986 15:03:42 -- fips/fips.sh@113 -- # build_openssl_config 00:13:11.986 15:03:42 -- fips/fips.sh@37 -- # cat 00:13:11.986 15:03:42 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:13:11.986 15:03:42 -- fips/fips.sh@58 -- # cat - 00:13:11.986 15:03:42 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:13:11.986 15:03:42 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:13:11.986 15:03:42 -- fips/fips.sh@116 -- # mapfile -t providers 00:13:11.986 15:03:42 -- fips/fips.sh@116 -- # openssl list -providers 00:13:11.986 15:03:42 -- fips/fips.sh@116 -- # grep name 00:13:11.986 15:03:42 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:13:11.986 15:03:42 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:13:11.986 15:03:42 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:13:11.986 15:03:42 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:13:11.986 15:03:42 -- common/autotest_common.sh@650 -- # local es=0 00:13:11.986 15:03:42 -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:13:11.986 15:03:42 -- fips/fips.sh@127 -- # : 00:13:11.986 15:03:42 -- common/autotest_common.sh@638 -- # local arg=openssl 00:13:11.986 15:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.986 15:03:42 -- common/autotest_common.sh@642 -- # type -t openssl 00:13:11.986 15:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.986 15:03:42 -- common/autotest_common.sh@644 -- # type -P openssl 00:13:11.986 15:03:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.986 15:03:42 -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:13:11.986 15:03:42 -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:13:11.986 15:03:42 -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:13:11.986 Error setting digest 00:13:11.987 406287F92D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:13:11.987 406287F92D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:13:11.987 15:03:42 -- common/autotest_common.sh@653 -- # es=1 00:13:11.987 15:03:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.987 15:03:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.987 15:03:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.987 15:03:42 -- fips/fips.sh@130 -- # nvmftestinit 00:13:11.987 15:03:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:11.987 15:03:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.987 15:03:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:11.987 15:03:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:11.987 15:03:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:11.987 15:03:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.987 15:03:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.987 15:03:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.987 15:03:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:11.987 15:03:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:11.987 15:03:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:11.987 15:03:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:11.987 15:03:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:11.987 15:03:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:11.987 15:03:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.987 15:03:42 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.987 15:03:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:11.987 15:03:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:11.987 15:03:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:11.987 15:03:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:11.987 15:03:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:11.987 15:03:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.987 15:03:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:11.987 15:03:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:11.987 15:03:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:11.987 15:03:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:11.987 15:03:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:11.987 15:03:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:11.987 Cannot find device "nvmf_tgt_br" 00:13:11.987 15:03:42 -- nvmf/common.sh@154 -- # true 00:13:11.987 15:03:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:11.987 Cannot find device "nvmf_tgt_br2" 00:13:11.987 15:03:42 -- nvmf/common.sh@155 -- # true 00:13:11.987 15:03:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:11.987 15:03:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:11.987 Cannot find device "nvmf_tgt_br" 00:13:11.987 15:03:42 -- nvmf/common.sh@157 -- # true 00:13:11.987 15:03:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:11.987 Cannot find device "nvmf_tgt_br2" 00:13:11.987 15:03:42 -- nvmf/common.sh@158 -- # true 00:13:11.987 15:03:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:12.246 15:03:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:12.246 15:03:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:12.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.246 15:03:42 -- nvmf/common.sh@161 -- # true 00:13:12.246 15:03:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:12.246 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:12.246 15:03:42 -- nvmf/common.sh@162 -- # true 00:13:12.246 15:03:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:12.246 15:03:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:12.246 15:03:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:12.246 15:03:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:12.246 15:03:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:12.246 15:03:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:12.246 15:03:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:12.246 15:03:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:12.246 15:03:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:12.246 15:03:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:12.246 15:03:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:12.246 15:03:42 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:12.246 15:03:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:12.246 15:03:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:12.246 15:03:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:12.246 15:03:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:12.246 15:03:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:12.246 15:03:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:12.246 15:03:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:12.246 15:03:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:12.246 15:03:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:12.246 15:03:43 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:12.246 15:03:43 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:12.246 15:03:43 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:12.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:12.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:13:12.246 00:13:12.246 --- 10.0.0.2 ping statistics --- 00:13:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.246 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:13:12.246 15:03:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:12.246 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:12.246 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:13:12.246 00:13:12.246 --- 10.0.0.3 ping statistics --- 00:13:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.246 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:13:12.246 15:03:43 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:12.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:12.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:13:12.246 00:13:12.246 --- 10.0.0.1 ping statistics --- 00:13:12.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.246 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:13:12.246 15:03:43 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.246 15:03:43 -- nvmf/common.sh@421 -- # return 0 00:13:12.246 15:03:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:12.246 15:03:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.246 15:03:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:12.246 15:03:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:12.246 15:03:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.246 15:03:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:12.246 15:03:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:12.506 15:03:43 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:13:12.506 15:03:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:12.506 15:03:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:12.506 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 15:03:43 -- nvmf/common.sh@469 -- # nvmfpid=77737 00:13:12.506 15:03:43 -- nvmf/common.sh@470 -- # waitforlisten 77737 00:13:12.506 15:03:43 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:12.506 15:03:43 -- common/autotest_common.sh@829 -- # '[' -z 77737 ']' 00:13:12.506 15:03:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.506 15:03:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.506 15:03:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.506 15:03:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.506 15:03:43 -- common/autotest_common.sh@10 -- # set +x 00:13:12.506 [2024-11-20 15:03:43.152164] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:12.506 [2024-11-20 15:03:43.152265] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.506 [2024-11-20 15:03:43.294709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.764 [2024-11-20 15:03:43.334358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:12.764 [2024-11-20 15:03:43.334524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.764 [2024-11-20 15:03:43.334539] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.764 [2024-11-20 15:03:43.334549] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
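(Annotation: the nvmf/common.sh steps and ping checks recorded above rebuild the virtual test network before the FIPS target starts. Below is a condensed, hedged sketch of that plumbing, reduced to the initiator/target pair at 10.0.0.1 and 10.0.0.2; the interface, namespace and bridge names are taken from the trace, and the second target interface is omitted.)

# Hedged sketch of the veth + network-namespace setup behind the pings above.
ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge                             # bridge joining the two veth peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                                          # initiator -> target, as checked above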
00:13:12.764 [2024-11-20 15:03:43.334578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.699 15:03:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.699 15:03:44 -- common/autotest_common.sh@862 -- # return 0 00:13:13.699 15:03:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:13.699 15:03:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.699 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:13:13.699 15:03:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.699 15:03:44 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:13:13.699 15:03:44 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:13.699 15:03:44 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.699 15:03:44 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:13:13.699 15:03:44 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.699 15:03:44 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.699 15:03:44 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:13.699 15:03:44 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:13.958 [2024-11-20 15:03:44.513789] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.958 [2024-11-20 15:03:44.529724] tcp.c: 914:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:13.958 [2024-11-20 15:03:44.529941] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.958 malloc0 00:13:13.958 15:03:44 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:13.958 15:03:44 -- fips/fips.sh@147 -- # bdevperf_pid=77782 00:13:13.958 15:03:44 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:13.958 15:03:44 -- fips/fips.sh@148 -- # waitforlisten 77782 /var/tmp/bdevperf.sock 00:13:13.958 15:03:44 -- common/autotest_common.sh@829 -- # '[' -z 77782 ']' 00:13:13.958 15:03:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:13.958 15:03:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:13.958 15:03:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:13.958 15:03:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.958 15:03:44 -- common/autotest_common.sh@10 -- # set +x 00:13:13.958 [2024-11-20 15:03:44.677709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
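The next stretch of the trace drives bdevperf against the TLS-enabled listener. A condensed sketch of that flow, with the key, socket names and NQNs copied from the trace and paths shortened to be relative to the spdk checkout (only the host/attach side is shown; the target-side key registration happens inside setup_nvmf_tgt_conf and is not reproduced here):

    # write the TLS PSK interchange key to a 0600 file
    key_path=test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"
    # start bdevperf idle (-z), attach a TLS-protected controller through the PSK, run the I/O test
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests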
00:13:13.958 [2024-11-20 15:03:44.677810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77782 ] 00:13:14.216 [2024-11-20 15:03:44.820626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.216 [2024-11-20 15:03:44.854785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.153 15:03:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.153 15:03:45 -- common/autotest_common.sh@862 -- # return 0 00:13:15.153 15:03:45 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:15.411 [2024-11-20 15:03:46.014233] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:15.411 TLSTESTn1 00:13:15.411 15:03:46 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:15.669 Running I/O for 10 seconds... 00:13:25.641 00:13:25.642 Latency(us) 00:13:25.642 [2024-11-20T15:03:56.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.642 [2024-11-20T15:03:56.446Z] Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:25.642 Verification LBA range: start 0x0 length 0x2000 00:13:25.642 TLSTESTn1 : 10.01 5354.82 20.92 0.00 0.00 23863.35 4974.78 31218.97 00:13:25.642 [2024-11-20T15:03:56.446Z] =================================================================================================================== 00:13:25.642 [2024-11-20T15:03:56.446Z] Total : 5354.82 20.92 0.00 0.00 23863.35 4974.78 31218.97 00:13:25.642 0 00:13:25.642 15:03:56 -- fips/fips.sh@1 -- # cleanup 00:13:25.642 15:03:56 -- fips/fips.sh@15 -- # process_shm --id 0 00:13:25.642 15:03:56 -- common/autotest_common.sh@806 -- # type=--id 00:13:25.642 15:03:56 -- common/autotest_common.sh@807 -- # id=0 00:13:25.642 15:03:56 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:25.642 15:03:56 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:25.642 15:03:56 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:25.642 15:03:56 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:25.642 15:03:56 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:25.642 15:03:56 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:25.642 nvmf_trace.0 00:13:25.642 15:03:56 -- common/autotest_common.sh@821 -- # return 0 00:13:25.642 15:03:56 -- fips/fips.sh@16 -- # killprocess 77782 00:13:25.642 15:03:56 -- common/autotest_common.sh@936 -- # '[' -z 77782 ']' 00:13:25.642 15:03:56 -- common/autotest_common.sh@940 -- # kill -0 77782 00:13:25.642 15:03:56 -- common/autotest_common.sh@941 -- # uname 00:13:25.642 15:03:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.642 15:03:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77782 00:13:25.642 15:03:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:25.642 15:03:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:25.642 
15:03:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77782' 00:13:25.642 killing process with pid 77782 00:13:25.642 15:03:56 -- common/autotest_common.sh@955 -- # kill 77782 00:13:25.642 15:03:56 -- common/autotest_common.sh@960 -- # wait 77782 00:13:25.642 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.642 00:13:25.642 Latency(us) 00:13:25.642 [2024-11-20T15:03:56.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.642 [2024-11-20T15:03:56.446Z] =================================================================================================================== 00:13:25.642 [2024-11-20T15:03:56.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.901 15:03:56 -- fips/fips.sh@17 -- # nvmftestfini 00:13:25.901 15:03:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:25.901 15:03:56 -- nvmf/common.sh@116 -- # sync 00:13:25.901 15:03:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:25.901 15:03:56 -- nvmf/common.sh@119 -- # set +e 00:13:25.901 15:03:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:25.901 15:03:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:25.901 rmmod nvme_tcp 00:13:25.901 rmmod nvme_fabrics 00:13:25.901 rmmod nvme_keyring 00:13:25.901 15:03:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:25.901 15:03:56 -- nvmf/common.sh@123 -- # set -e 00:13:25.901 15:03:56 -- nvmf/common.sh@124 -- # return 0 00:13:25.901 15:03:56 -- nvmf/common.sh@477 -- # '[' -n 77737 ']' 00:13:25.901 15:03:56 -- nvmf/common.sh@478 -- # killprocess 77737 00:13:25.901 15:03:56 -- common/autotest_common.sh@936 -- # '[' -z 77737 ']' 00:13:25.901 15:03:56 -- common/autotest_common.sh@940 -- # kill -0 77737 00:13:25.901 15:03:56 -- common/autotest_common.sh@941 -- # uname 00:13:25.901 15:03:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:25.901 15:03:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77737 00:13:25.901 15:03:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:25.901 15:03:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:25.901 15:03:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77737' 00:13:25.901 killing process with pid 77737 00:13:25.901 15:03:56 -- common/autotest_common.sh@955 -- # kill 77737 00:13:25.901 15:03:56 -- common/autotest_common.sh@960 -- # wait 77737 00:13:26.159 15:03:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:26.159 15:03:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:26.159 15:03:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:26.159 15:03:56 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.159 15:03:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:26.159 15:03:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.159 15:03:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.159 15:03:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.159 15:03:56 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:26.159 15:03:56 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:13:26.159 00:13:26.159 real 0m14.558s 00:13:26.159 user 0m20.065s 00:13:26.159 sys 0m5.798s 00:13:26.159 15:03:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:26.159 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:13:26.159 ************************************ 00:13:26.159 END TEST nvmf_fips 
00:13:26.159 ************************************ 00:13:26.159 15:03:56 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:13:26.159 15:03:56 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:26.159 15:03:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:26.159 15:03:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:26.159 15:03:56 -- common/autotest_common.sh@10 -- # set +x 00:13:26.159 ************************************ 00:13:26.159 START TEST nvmf_fuzz 00:13:26.159 ************************************ 00:13:26.159 15:03:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:13:26.418 * Looking for test storage... 00:13:26.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:26.418 15:03:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:26.418 15:03:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:26.418 15:03:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:26.418 15:03:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:26.418 15:03:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:26.418 15:03:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:26.418 15:03:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:26.418 15:03:57 -- scripts/common.sh@335 -- # IFS=.-: 00:13:26.418 15:03:57 -- scripts/common.sh@335 -- # read -ra ver1 00:13:26.418 15:03:57 -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.418 15:03:57 -- scripts/common.sh@336 -- # read -ra ver2 00:13:26.418 15:03:57 -- scripts/common.sh@337 -- # local 'op=<' 00:13:26.418 15:03:57 -- scripts/common.sh@339 -- # ver1_l=2 00:13:26.418 15:03:57 -- scripts/common.sh@340 -- # ver2_l=1 00:13:26.418 15:03:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:26.418 15:03:57 -- scripts/common.sh@343 -- # case "$op" in 00:13:26.418 15:03:57 -- scripts/common.sh@344 -- # : 1 00:13:26.418 15:03:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:26.418 15:03:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.418 15:03:57 -- scripts/common.sh@364 -- # decimal 1 00:13:26.418 15:03:57 -- scripts/common.sh@352 -- # local d=1 00:13:26.418 15:03:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.418 15:03:57 -- scripts/common.sh@354 -- # echo 1 00:13:26.418 15:03:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:26.418 15:03:57 -- scripts/common.sh@365 -- # decimal 2 00:13:26.418 15:03:57 -- scripts/common.sh@352 -- # local d=2 00:13:26.418 15:03:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.418 15:03:57 -- scripts/common.sh@354 -- # echo 2 00:13:26.418 15:03:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:26.418 15:03:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:26.418 15:03:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:26.418 15:03:57 -- scripts/common.sh@367 -- # return 0 00:13:26.418 15:03:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.418 15:03:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:26.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.418 --rc genhtml_branch_coverage=1 00:13:26.418 --rc genhtml_function_coverage=1 00:13:26.418 --rc genhtml_legend=1 00:13:26.418 --rc geninfo_all_blocks=1 00:13:26.418 --rc geninfo_unexecuted_blocks=1 00:13:26.418 00:13:26.418 ' 00:13:26.418 15:03:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:26.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.418 --rc genhtml_branch_coverage=1 00:13:26.418 --rc genhtml_function_coverage=1 00:13:26.418 --rc genhtml_legend=1 00:13:26.418 --rc geninfo_all_blocks=1 00:13:26.418 --rc geninfo_unexecuted_blocks=1 00:13:26.418 00:13:26.418 ' 00:13:26.418 15:03:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:26.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.418 --rc genhtml_branch_coverage=1 00:13:26.418 --rc genhtml_function_coverage=1 00:13:26.418 --rc genhtml_legend=1 00:13:26.418 --rc geninfo_all_blocks=1 00:13:26.418 --rc geninfo_unexecuted_blocks=1 00:13:26.418 00:13:26.418 ' 00:13:26.418 15:03:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:26.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.418 --rc genhtml_branch_coverage=1 00:13:26.418 --rc genhtml_function_coverage=1 00:13:26.418 --rc genhtml_legend=1 00:13:26.418 --rc geninfo_all_blocks=1 00:13:26.418 --rc geninfo_unexecuted_blocks=1 00:13:26.418 00:13:26.418 ' 00:13:26.418 15:03:57 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:26.418 15:03:57 -- nvmf/common.sh@7 -- # uname -s 00:13:26.418 15:03:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.418 15:03:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.418 15:03:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.418 15:03:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.418 15:03:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.418 15:03:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.418 15:03:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.418 15:03:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.418 15:03:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.418 15:03:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.418 15:03:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 
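What follows is nvmf_veth_init tearing down and rebuilding the test network for the fuzz run. Condensed, the topology it creates looks like the sketch below; every interface name and address is taken from the trace itself:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    # bring every link up (including lo inside the namespace), then open TCP/4420 from the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that the trace records (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply verify this topology before the target is started.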
00:13:26.418 15:03:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:13:26.418 15:03:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.418 15:03:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.418 15:03:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:26.418 15:03:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:26.418 15:03:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.418 15:03:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.418 15:03:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.418 15:03:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.418 15:03:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.418 15:03:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.418 15:03:57 -- paths/export.sh@5 -- # export PATH 00:13:26.419 15:03:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.419 15:03:57 -- nvmf/common.sh@46 -- # : 0 00:13:26.419 15:03:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:26.419 15:03:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:26.419 15:03:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:26.419 15:03:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.419 15:03:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.419 15:03:57 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:13:26.419 15:03:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:26.419 15:03:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:26.419 15:03:57 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:13:26.419 15:03:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:26.419 15:03:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.419 15:03:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:26.419 15:03:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:26.419 15:03:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:26.419 15:03:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.419 15:03:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.419 15:03:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.419 15:03:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:26.419 15:03:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:26.419 15:03:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:26.419 15:03:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:26.419 15:03:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:26.419 15:03:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:26.419 15:03:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:26.419 15:03:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:26.419 15:03:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:26.419 15:03:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:26.419 15:03:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:26.419 15:03:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:26.419 15:03:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:26.419 15:03:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:26.419 15:03:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:26.419 15:03:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:26.419 15:03:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:26.419 15:03:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:26.419 15:03:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:26.419 15:03:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:26.419 Cannot find device "nvmf_tgt_br" 00:13:26.419 15:03:57 -- nvmf/common.sh@154 -- # true 00:13:26.419 15:03:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:26.419 Cannot find device "nvmf_tgt_br2" 00:13:26.419 15:03:57 -- nvmf/common.sh@155 -- # true 00:13:26.419 15:03:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:26.419 15:03:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:26.419 Cannot find device "nvmf_tgt_br" 00:13:26.419 15:03:57 -- nvmf/common.sh@157 -- # true 00:13:26.419 15:03:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:26.419 Cannot find device "nvmf_tgt_br2" 00:13:26.419 15:03:57 -- nvmf/common.sh@158 -- # true 00:13:26.419 15:03:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:26.677 15:03:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:26.677 15:03:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:26.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.677 15:03:57 -- nvmf/common.sh@161 -- # true 00:13:26.677 15:03:57 -- 
nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:26.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:26.677 15:03:57 -- nvmf/common.sh@162 -- # true 00:13:26.677 15:03:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:26.677 15:03:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:26.677 15:03:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:26.677 15:03:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:26.677 15:03:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:26.677 15:03:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:26.677 15:03:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:26.677 15:03:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:26.677 15:03:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:26.677 15:03:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:26.677 15:03:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:26.677 15:03:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:26.677 15:03:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:26.677 15:03:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:26.677 15:03:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:26.677 15:03:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:26.677 15:03:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:26.677 15:03:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:26.677 15:03:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:26.677 15:03:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:26.677 15:03:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:26.677 15:03:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:26.677 15:03:57 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:26.677 15:03:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:26.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:26.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:13:26.677 00:13:26.677 --- 10.0.0.2 ping statistics --- 00:13:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.677 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:26.677 15:03:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:26.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:26.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:13:26.677 00:13:26.677 --- 10.0.0.3 ping statistics --- 00:13:26.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.678 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:26.678 15:03:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:26.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:26.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:26.678 00:13:26.678 --- 10.0.0.1 ping statistics --- 00:13:26.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:26.678 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:26.678 15:03:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:26.678 15:03:57 -- nvmf/common.sh@421 -- # return 0 00:13:26.678 15:03:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:26.678 15:03:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:26.678 15:03:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:26.678 15:03:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:26.678 15:03:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:26.678 15:03:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:26.678 15:03:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:26.678 15:03:57 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=78115 00:13:26.678 15:03:57 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:26.678 15:03:57 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:26.678 15:03:57 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 78115 00:13:26.678 15:03:57 -- common/autotest_common.sh@829 -- # '[' -z 78115 ']' 00:13:26.678 15:03:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.678 15:03:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:26.678 15:03:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
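Once this second target is listening, fabrics_fuzz.sh creates a minimal subsystem and points nvme_fuzz at it. A condensed sketch of the steps the following trace walks through, with arguments copied from the trace and paths shortened to the spdk checkout (rpc_cmd is assumed to be the test wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create -b Malloc0 64 512
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    # 30-second randomized run with seed 123456, then a replay of the canned JSON cases
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$trid" \
        -j test/app/fuzz/nvme_fuzz/example.json -a
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1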
00:13:26.678 15:03:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:26.678 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 15:03:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:27.245 15:03:57 -- common/autotest_common.sh@862 -- # return 0 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:27.245 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.245 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:13:27.245 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.245 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 Malloc0 00:13:27.245 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:27.245 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.245 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:27.245 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.245 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.245 15:03:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.245 15:03:57 -- common/autotest_common.sh@10 -- # set +x 00:13:27.245 15:03:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:13:27.245 15:03:57 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:13:27.504 Shutting down the fuzz application 00:13:27.504 15:03:58 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:13:27.764 Shutting down the fuzz application 00:13:27.764 15:03:58 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.764 15:03:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.764 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:13:27.764 15:03:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.764 15:03:58 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:27.764 15:03:58 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:13:27.764 15:03:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.764 15:03:58 -- nvmf/common.sh@116 -- # sync 00:13:27.764 15:03:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.764 15:03:58 -- nvmf/common.sh@119 -- # set +e 00:13:27.764 15:03:58 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.764 15:03:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.764 rmmod nvme_tcp 00:13:27.764 rmmod nvme_fabrics 00:13:27.764 rmmod nvme_keyring 00:13:27.764 15:03:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.764 15:03:58 -- nvmf/common.sh@123 -- # set -e 00:13:27.764 15:03:58 -- nvmf/common.sh@124 -- # return 0 00:13:27.764 15:03:58 -- nvmf/common.sh@477 -- # '[' -n 78115 ']' 00:13:27.764 15:03:58 -- nvmf/common.sh@478 -- # killprocess 78115 00:13:27.764 15:03:58 -- common/autotest_common.sh@936 -- # '[' -z 78115 ']' 00:13:27.764 15:03:58 -- common/autotest_common.sh@940 -- # kill -0 78115 00:13:27.764 15:03:58 -- common/autotest_common.sh@941 -- # uname 00:13:27.764 15:03:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.764 15:03:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78115 00:13:27.764 15:03:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:27.764 15:03:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:27.764 killing process with pid 78115 00:13:27.764 15:03:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78115' 00:13:27.764 15:03:58 -- common/autotest_common.sh@955 -- # kill 78115 00:13:27.764 15:03:58 -- common/autotest_common.sh@960 -- # wait 78115 00:13:28.023 15:03:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.023 15:03:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.023 15:03:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.023 15:03:58 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.023 15:03:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.023 15:03:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.023 15:03:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.023 15:03:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.023 15:03:58 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:28.023 15:03:58 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:13:28.023 00:13:28.023 real 0m1.745s 00:13:28.023 user 0m1.645s 00:13:28.023 sys 0m0.544s 00:13:28.023 15:03:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:28.023 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:13:28.023 ************************************ 00:13:28.023 END TEST nvmf_fuzz 00:13:28.023 ************************************ 00:13:28.023 15:03:58 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:28.023 15:03:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:28.023 15:03:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:28.023 15:03:58 -- common/autotest_common.sh@10 -- # set +x 00:13:28.023 ************************************ 00:13:28.023 START TEST nvmf_multiconnection 00:13:28.023 ************************************ 00:13:28.023 15:03:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:13:28.023 * Looking for test storage... 
00:13:28.023 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:28.023 15:03:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:28.023 15:03:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:28.023 15:03:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:28.282 15:03:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:28.282 15:03:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:28.282 15:03:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:28.282 15:03:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:28.282 15:03:58 -- scripts/common.sh@335 -- # IFS=.-: 00:13:28.282 15:03:58 -- scripts/common.sh@335 -- # read -ra ver1 00:13:28.282 15:03:58 -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.282 15:03:58 -- scripts/common.sh@336 -- # read -ra ver2 00:13:28.282 15:03:58 -- scripts/common.sh@337 -- # local 'op=<' 00:13:28.282 15:03:58 -- scripts/common.sh@339 -- # ver1_l=2 00:13:28.282 15:03:58 -- scripts/common.sh@340 -- # ver2_l=1 00:13:28.282 15:03:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:28.282 15:03:58 -- scripts/common.sh@343 -- # case "$op" in 00:13:28.282 15:03:58 -- scripts/common.sh@344 -- # : 1 00:13:28.282 15:03:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:28.282 15:03:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.282 15:03:58 -- scripts/common.sh@364 -- # decimal 1 00:13:28.282 15:03:58 -- scripts/common.sh@352 -- # local d=1 00:13:28.282 15:03:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.282 15:03:58 -- scripts/common.sh@354 -- # echo 1 00:13:28.282 15:03:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:28.282 15:03:58 -- scripts/common.sh@365 -- # decimal 2 00:13:28.282 15:03:58 -- scripts/common.sh@352 -- # local d=2 00:13:28.282 15:03:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.282 15:03:58 -- scripts/common.sh@354 -- # echo 2 00:13:28.282 15:03:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:28.282 15:03:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:28.282 15:03:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:28.282 15:03:58 -- scripts/common.sh@367 -- # return 0 00:13:28.282 15:03:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.282 15:03:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:28.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.282 --rc genhtml_branch_coverage=1 00:13:28.282 --rc genhtml_function_coverage=1 00:13:28.282 --rc genhtml_legend=1 00:13:28.282 --rc geninfo_all_blocks=1 00:13:28.282 --rc geninfo_unexecuted_blocks=1 00:13:28.282 00:13:28.282 ' 00:13:28.282 15:03:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:28.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.282 --rc genhtml_branch_coverage=1 00:13:28.282 --rc genhtml_function_coverage=1 00:13:28.282 --rc genhtml_legend=1 00:13:28.282 --rc geninfo_all_blocks=1 00:13:28.282 --rc geninfo_unexecuted_blocks=1 00:13:28.282 00:13:28.282 ' 00:13:28.282 15:03:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:28.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.282 --rc genhtml_branch_coverage=1 00:13:28.282 --rc genhtml_function_coverage=1 00:13:28.282 --rc genhtml_legend=1 00:13:28.282 --rc geninfo_all_blocks=1 00:13:28.282 --rc geninfo_unexecuted_blocks=1 00:13:28.282 00:13:28.282 ' 00:13:28.282 
15:03:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:28.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.282 --rc genhtml_branch_coverage=1 00:13:28.282 --rc genhtml_function_coverage=1 00:13:28.282 --rc genhtml_legend=1 00:13:28.282 --rc geninfo_all_blocks=1 00:13:28.282 --rc geninfo_unexecuted_blocks=1 00:13:28.282 00:13:28.282 ' 00:13:28.283 15:03:58 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:28.283 15:03:58 -- nvmf/common.sh@7 -- # uname -s 00:13:28.283 15:03:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.283 15:03:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.283 15:03:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.283 15:03:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.283 15:03:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.283 15:03:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.283 15:03:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.283 15:03:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.283 15:03:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.283 15:03:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:13:28.283 15:03:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:13:28.283 15:03:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.283 15:03:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.283 15:03:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:28.283 15:03:58 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:28.283 15:03:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.283 15:03:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.283 15:03:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.283 15:03:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.283 15:03:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.283 15:03:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.283 15:03:58 -- paths/export.sh@5 -- # export PATH 00:13:28.283 15:03:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.283 15:03:58 -- nvmf/common.sh@46 -- # : 0 00:13:28.283 15:03:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:28.283 15:03:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:28.283 15:03:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:28.283 15:03:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.283 15:03:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.283 15:03:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:28.283 15:03:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:28.283 15:03:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:28.283 15:03:58 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.283 15:03:58 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.283 15:03:58 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:13:28.283 15:03:58 -- target/multiconnection.sh@16 -- # nvmftestinit 00:13:28.283 15:03:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:28.283 15:03:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.283 15:03:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:28.283 15:03:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:28.283 15:03:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:28.283 15:03:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.283 15:03:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.283 15:03:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.283 15:03:58 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:28.283 15:03:58 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:28.283 15:03:58 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.283 15:03:58 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.283 15:03:58 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:28.283 15:03:58 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:28.283 15:03:58 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.283 15:03:58 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.283 15:03:58 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.283 15:03:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.283 15:03:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.283 15:03:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.283 15:03:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.283 15:03:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.283 15:03:58 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:28.283 15:03:58 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:28.283 Cannot find device "nvmf_tgt_br" 00:13:28.283 15:03:58 -- nvmf/common.sh@154 -- # true 00:13:28.283 15:03:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.283 Cannot find device "nvmf_tgt_br2" 00:13:28.283 15:03:58 -- nvmf/common.sh@155 -- # true 00:13:28.283 15:03:58 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:28.283 15:03:58 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:28.283 Cannot find device "nvmf_tgt_br" 00:13:28.283 15:03:58 -- nvmf/common.sh@157 -- # true 00:13:28.283 15:03:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:28.283 Cannot find device "nvmf_tgt_br2" 00:13:28.283 15:03:58 -- nvmf/common.sh@158 -- # true 00:13:28.283 15:03:58 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:28.283 15:03:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:28.283 15:03:59 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.283 15:03:59 -- nvmf/common.sh@161 -- # true 00:13:28.283 15:03:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.283 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.283 15:03:59 -- nvmf/common.sh@162 -- # true 00:13:28.283 15:03:59 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.283 15:03:59 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.283 15:03:59 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.283 15:03:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.542 15:03:59 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.542 15:03:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.542 15:03:59 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.542 15:03:59 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:28.542 15:03:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:28.542 15:03:59 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:28.542 15:03:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:28.542 15:03:59 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:28.542 15:03:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:28.542 15:03:59 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.542 15:03:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:13:28.542 15:03:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.542 15:03:59 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:28.542 15:03:59 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:28.542 15:03:59 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.542 15:03:59 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.542 15:03:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:28.542 15:03:59 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.542 15:03:59 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.542 15:03:59 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:28.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:13:28.542 00:13:28.542 --- 10.0.0.2 ping statistics --- 00:13:28.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.542 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:28.542 15:03:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:28.542 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.542 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:13:28.542 00:13:28.542 --- 10.0.0.3 ping statistics --- 00:13:28.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.542 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:28.542 15:03:59 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:28.542 00:13:28.542 --- 10.0.0.1 ping statistics --- 00:13:28.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.542 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:28.542 15:03:59 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.542 15:03:59 -- nvmf/common.sh@421 -- # return 0 00:13:28.542 15:03:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.542 15:03:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.542 15:03:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.542 15:03:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.542 15:03:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.542 15:03:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.542 15:03:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.542 15:03:59 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:13:28.542 15:03:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.542 15:03:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:28.542 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:28.542 15:03:59 -- nvmf/common.sh@469 -- # nvmfpid=78296 00:13:28.542 15:03:59 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.542 15:03:59 -- nvmf/common.sh@470 -- # waitforlisten 78296 00:13:28.542 15:03:59 -- common/autotest_common.sh@829 -- # '[' -z 78296 ']' 00:13:28.542 15:03:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.542 15:03:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:28.542 15:03:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.542 15:03:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:28.542 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:28.868 [2024-11-20 15:03:59.352685] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:13:28.868 [2024-11-20 15:03:59.352812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.868 [2024-11-20 15:03:59.491861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.868 [2024-11-20 15:03:59.529605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.868 [2024-11-20 15:03:59.529757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.868 [2024-11-20 15:03:59.529771] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.868 [2024-11-20 15:03:59.529781] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.868 [2024-11-20 15:03:59.529905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.868 [2024-11-20 15:03:59.530726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.868 [2024-11-20 15:03:59.530787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.868 [2024-11-20 15:03:59.530793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.868 15:03:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.868 15:03:59 -- common/autotest_common.sh@862 -- # return 0 00:13:28.868 15:03:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:28.868 15:03:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:28.868 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 15:03:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.129 15:03:59 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.129 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.129 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 [2024-11-20 15:03:59.648141] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.129 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.129 15:03:59 -- target/multiconnection.sh@21 -- # seq 1 11 00:13:29.129 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.129 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:29.129 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.129 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 Malloc1 00:13:29.129 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.129 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:13:29.129 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.129 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.129 15:03:59 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.129 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.129 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.129 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.129 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.129 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.129 [2024-11-20 15:03:59.716502] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.129 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.130 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 Malloc2 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.130 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 Malloc3 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:29.130 
15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.130 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 Malloc4 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.130 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 Malloc5 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.130 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 Malloc6 00:13:29.130 15:03:59 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.130 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.130 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:13:29.130 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.130 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.388 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:13:29.388 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 Malloc7 00:13:29.388 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:03:59 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:13:29.388 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:03:59 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:13:29.388 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:03:59 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:13:29.388 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:03:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:03:59 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.388 15:03:59 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:13:29.388 15:03:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:03:59 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 Malloc8 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 
-- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.388 15:04:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 Malloc9 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.388 15:04:00 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 Malloc10 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:13:29.388 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.388 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.388 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.388 15:04:00 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.389 15:04:00 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:13:29.389 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.389 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 Malloc11 00:13:29.389 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.389 15:04:00 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:13:29.389 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.389 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.389 15:04:00 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:13:29.389 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.389 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.389 15:04:00 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:13:29.389 15:04:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.389 15:04:00 -- common/autotest_common.sh@10 -- # set +x 00:13:29.389 15:04:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.389 15:04:00 -- target/multiconnection.sh@28 -- # seq 1 11 00:13:29.389 15:04:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:29.389 15:04:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.647 15:04:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:13:29.647 15:04:00 -- common/autotest_common.sh@1187 -- # local i=0 00:13:29.647 15:04:00 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.647 15:04:00 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:29.647 15:04:00 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:31.550 15:04:02 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:31.550 15:04:02 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:31.550 15:04:02 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:13:31.550 15:04:02 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:31.550 15:04:02 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.550 15:04:02 -- common/autotest_common.sh@1197 -- # return 0 00:13:31.550 15:04:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:31.550 15:04:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:13:31.808 15:04:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:13:31.808 15:04:02 -- common/autotest_common.sh@1187 -- # local i=0 00:13:31.808 15:04:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.808 15:04:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:31.808 15:04:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:33.737 15:04:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:33.737 15:04:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o 
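The target/multiconnection.sh@21-25 entries above are iterations of the target-side setup loop: for each index 1 through 11 the test creates a 64 MB malloc bdev (512-byte blocks), a subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420, before moving on to the host-side connects that continue below. A minimal stand-alone sketch of that loop follows; it assumes a running SPDK nvmf target with a TCP transport already created, and uses scripts/rpc.py directly where the test drives the same RPCs through the rpc_cmd helper from autotest_common.sh.

    # Sketch only: one malloc bdev, subsystem, namespace and TCP listener per index.
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"                              # 64 MB bdev, 512-byte blocks
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a: allow any host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done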
NAME,SERIAL 00:13:33.737 15:04:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:13:33.737 15:04:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:33.737 15:04:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.737 15:04:04 -- common/autotest_common.sh@1197 -- # return 0 00:13:33.737 15:04:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:33.737 15:04:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:13:33.993 15:04:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:13:33.993 15:04:04 -- common/autotest_common.sh@1187 -- # local i=0 00:13:33.993 15:04:04 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:33.993 15:04:04 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:33.993 15:04:04 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:35.893 15:04:06 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:35.893 15:04:06 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:35.893 15:04:06 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:13:35.893 15:04:06 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:35.893 15:04:06 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:35.893 15:04:06 -- common/autotest_common.sh@1197 -- # return 0 00:13:35.893 15:04:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:35.893 15:04:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:13:36.151 15:04:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:13:36.151 15:04:06 -- common/autotest_common.sh@1187 -- # local i=0 00:13:36.151 15:04:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.151 15:04:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:36.151 15:04:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:38.051 15:04:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:38.051 15:04:08 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:13:38.051 15:04:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:38.051 15:04:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:38.051 15:04:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.051 15:04:08 -- common/autotest_common.sh@1197 -- # return 0 00:13:38.051 15:04:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:38.051 15:04:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:38.309 15:04:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:38.309 15:04:08 -- common/autotest_common.sh@1187 -- # local i=0 00:13:38.309 15:04:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.309 15:04:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:38.309 15:04:08 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:40.214 15:04:10 -- 
common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:40.214 15:04:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:40.214 15:04:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:13:40.214 15:04:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:40.214 15:04:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.214 15:04:10 -- common/autotest_common.sh@1197 -- # return 0 00:13:40.214 15:04:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:40.214 15:04:10 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:40.472 15:04:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:40.472 15:04:11 -- common/autotest_common.sh@1187 -- # local i=0 00:13:40.472 15:04:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:40.472 15:04:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:40.472 15:04:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:42.371 15:04:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:42.371 15:04:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:42.371 15:04:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:13:42.371 15:04:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:42.371 15:04:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:42.371 15:04:13 -- common/autotest_common.sh@1197 -- # return 0 00:13:42.371 15:04:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:42.371 15:04:13 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:42.630 15:04:13 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:42.630 15:04:13 -- common/autotest_common.sh@1187 -- # local i=0 00:13:42.630 15:04:13 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.630 15:04:13 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:42.630 15:04:13 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:44.533 15:04:15 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:44.533 15:04:15 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:44.533 15:04:15 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:13:44.533 15:04:15 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:44.533 15:04:15 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.533 15:04:15 -- common/autotest_common.sh@1197 -- # return 0 00:13:44.533 15:04:15 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:44.533 15:04:15 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:44.791 15:04:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:44.791 15:04:15 -- common/autotest_common.sh@1187 -- # local i=0 00:13:44.791 15:04:15 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:44.791 15:04:15 -- 
common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:44.791 15:04:15 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:46.692 15:04:17 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:46.692 15:04:17 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:46.692 15:04:17 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:13:46.692 15:04:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:46.692 15:04:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:46.692 15:04:17 -- common/autotest_common.sh@1197 -- # return 0 00:13:46.692 15:04:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:46.692 15:04:17 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:46.950 15:04:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:46.950 15:04:17 -- common/autotest_common.sh@1187 -- # local i=0 00:13:46.950 15:04:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.950 15:04:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:46.950 15:04:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:48.852 15:04:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:48.852 15:04:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:48.852 15:04:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:13:48.852 15:04:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:48.852 15:04:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.852 15:04:19 -- common/autotest_common.sh@1197 -- # return 0 00:13:48.852 15:04:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:48.852 15:04:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:49.110 15:04:19 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:49.110 15:04:19 -- common/autotest_common.sh@1187 -- # local i=0 00:13:49.110 15:04:19 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.110 15:04:19 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:49.110 15:04:19 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:51.033 15:04:21 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:51.033 15:04:21 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:13:51.033 15:04:21 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:51.033 15:04:21 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:51.033 15:04:21 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:51.033 15:04:21 -- common/autotest_common.sh@1197 -- # return 0 00:13:51.033 15:04:21 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:51.033 15:04:21 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:51.293 15:04:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:51.293 15:04:21 -- common/autotest_common.sh@1187 -- # local i=0 
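The target/multiconnection.sh@28-30 entries above and below are the host-side half of the test: each subsystem is connected over NVMe/TCP with nvme connect, and waitforserial polls lsblk until a block device reporting the expected serial (SPDK1 through SPDK11) appears. A simplified sketch of that loop follows; it assumes nvme-cli is installed and reuses the host NQN/ID seen in the trace, while the real waitforserial helper in autotest_common.sh additionally gives up after 15 polls rather than waiting indefinitely.

    HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece
    for i in $(seq 1 11); do
        nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
                     -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
        # Wait until the namespace shows up as a block device with the matching serial.
        until lsblk -l -o NAME,SERIAL | grep -qw "SPDK$i"; do
            sleep 2
        done
    done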
00:13:51.293 15:04:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:13:51.293 15:04:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:13:51.293 15:04:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:13:53.194 15:04:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:13:53.194 15:04:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:13:53.194 15:04:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:13:53.194 15:04:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:13:53.194 15:04:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:13:53.194 15:04:23 -- common/autotest_common.sh@1197 -- # return 0 00:13:53.194 15:04:23 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:53.194 [global] 00:13:53.194 thread=1 00:13:53.194 invalidate=1 00:13:53.194 rw=read 00:13:53.194 time_based=1 00:13:53.194 runtime=10 00:13:53.194 ioengine=libaio 00:13:53.194 direct=1 00:13:53.194 bs=262144 00:13:53.194 iodepth=64 00:13:53.194 norandommap=1 00:13:53.194 numjobs=1 00:13:53.194 00:13:53.194 [job0] 00:13:53.194 filename=/dev/nvme0n1 00:13:53.194 [job1] 00:13:53.194 filename=/dev/nvme10n1 00:13:53.194 [job2] 00:13:53.194 filename=/dev/nvme1n1 00:13:53.194 [job3] 00:13:53.194 filename=/dev/nvme2n1 00:13:53.194 [job4] 00:13:53.194 filename=/dev/nvme3n1 00:13:53.194 [job5] 00:13:53.194 filename=/dev/nvme4n1 00:13:53.194 [job6] 00:13:53.194 filename=/dev/nvme5n1 00:13:53.194 [job7] 00:13:53.194 filename=/dev/nvme6n1 00:13:53.194 [job8] 00:13:53.194 filename=/dev/nvme7n1 00:13:53.194 [job9] 00:13:53.194 filename=/dev/nvme8n1 00:13:53.194 [job10] 00:13:53.194 filename=/dev/nvme9n1 00:13:53.453 Could not set queue depth (nvme0n1) 00:13:53.453 Could not set queue depth (nvme10n1) 00:13:53.453 Could not set queue depth (nvme1n1) 00:13:53.453 Could not set queue depth (nvme2n1) 00:13:53.453 Could not set queue depth (nvme3n1) 00:13:53.453 Could not set queue depth (nvme4n1) 00:13:53.453 Could not set queue depth (nvme5n1) 00:13:53.453 Could not set queue depth (nvme6n1) 00:13:53.453 Could not set queue depth (nvme7n1) 00:13:53.453 Could not set queue depth (nvme8n1) 00:13:53.453 Could not set queue depth (nvme9n1) 00:13:53.453 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:13:53.453 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:53.453 fio-3.35 00:13:53.453 Starting 11 threads 00:14:05.686 00:14:05.687 job0: (groupid=0, jobs=1): err= 0: pid=78748: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=531, BW=133MiB/s (139MB/s)(1340MiB/10088msec) 00:14:05.687 slat (usec): min=17, max=43819, avg=1860.97, stdev=4242.39 00:14:05.687 clat (msec): min=55, max=202, avg=118.42, stdev=11.85 00:14:05.687 lat (msec): min=56, max=202, avg=120.28, stdev=12.20 00:14:05.687 clat percentiles (msec): 00:14:05.687 | 1.00th=[ 103], 5.00th=[ 107], 10.00th=[ 109], 20.00th=[ 111], 00:14:05.687 | 30.00th=[ 113], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 118], 00:14:05.687 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 134], 95.00th=[ 140], 00:14:05.687 | 99.00th=[ 153], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 203], 00:14:05.687 | 99.99th=[ 203] 00:14:05.687 bw ( KiB/s): min=115712, max=144384, per=6.45%, avg=135453.10, stdev=9206.86, samples=20 00:14:05.687 iops : min= 452, max= 564, avg=528.80, stdev=35.85, samples=20 00:14:05.687 lat (msec) : 100=0.86%, 250=99.14% 00:14:05.687 cpu : usr=0.25%, sys=2.22%, ctx=1321, majf=0, minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=5359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job1: (groupid=0, jobs=1): err= 0: pid=78749: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=1942, BW=486MiB/s (509MB/s)(4861MiB/10009msec) 00:14:05.687 slat (usec): min=17, max=19560, avg=509.94, stdev=1042.06 00:14:05.687 clat (usec): min=7383, max=52610, avg=32397.99, stdev=2789.46 00:14:05.687 lat (usec): min=10385, max=52658, avg=32907.92, stdev=2804.71 00:14:05.687 clat percentiles (usec): 00:14:05.687 | 1.00th=[26608], 5.00th=[29230], 10.00th=[30016], 20.00th=[30802], 00:14:05.687 | 30.00th=[31327], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 00:14:05.687 | 70.00th=[33162], 80.00th=[33817], 90.00th=[35390], 95.00th=[38011], 00:14:05.687 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[48497], 00:14:05.687 | 99.99th=[52691] 00:14:05.687 bw ( KiB/s): min=439441, max=519153, per=23.65%, avg=496842.37, stdev=18278.38, samples=19 00:14:05.687 iops : min= 1716, max= 2027, avg=1940.68, stdev=71.43, samples=19 00:14:05.687 lat (msec) : 10=0.01%, 20=0.35%, 50=99.60%, 100=0.04% 00:14:05.687 cpu : usr=0.80%, sys=6.85%, ctx=4034, majf=0, minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=19443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job2: (groupid=0, jobs=1): err= 0: pid=78750: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=529, BW=132MiB/s (139MB/s)(1334MiB/10087msec) 00:14:05.687 slat (usec): min=17, max=59020, avg=1869.01, stdev=4339.05 00:14:05.687 clat (msec): min=50, max=200, avg=118.91, stdev=12.64 00:14:05.687 lat (msec): min=50, max=200, avg=120.78, stdev=13.00 00:14:05.687 clat percentiles (msec): 00:14:05.687 | 
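The read pass is started with scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10: 256 KiB sequential reads at queue depth 64 for 10 seconds against every connected namespace, using libaio with direct I/O. From the [global] and [jobN] sections echoed above, the generated job file is equivalent to the sketch below (reconstructed for anyone reproducing the run by hand; the wrapper's internals are not shown in this log). The jobN-to-device mapping follows the order the devices were listed, so job1 lands on /dev/nvme10n1 and job2 on /dev/nvme1n1 in this run.

    # multiconnection-read.fio -- reconstructed from the parameters printed above
    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1
    # ...one [jobN] section per connected namespace, eleven in total...

    # run with:  fio multiconnection-read.fio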
1.00th=[ 88], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 111], 00:14:05.687 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 120], 00:14:05.687 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 134], 95.00th=[ 142], 00:14:05.687 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 201], 99.95th=[ 201], 00:14:05.687 | 99.99th=[ 201] 00:14:05.687 bw ( KiB/s): min=109056, max=143872, per=6.42%, avg=134946.10, stdev=9860.68, samples=20 00:14:05.687 iops : min= 426, max= 562, avg=526.95, stdev=38.46, samples=20 00:14:05.687 lat (msec) : 100=1.31%, 250=98.69% 00:14:05.687 cpu : usr=0.23%, sys=2.10%, ctx=1300, majf=0, minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=5337,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job3: (groupid=0, jobs=1): err= 0: pid=78751: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=527, BW=132MiB/s (138MB/s)(1330MiB/10087msec) 00:14:05.687 slat (usec): min=17, max=88517, avg=1875.14, stdev=4466.64 00:14:05.687 clat (msec): min=19, max=204, avg=119.27, stdev=12.82 00:14:05.687 lat (msec): min=19, max=210, avg=121.15, stdev=13.17 00:14:05.687 clat percentiles (msec): 00:14:05.687 | 1.00th=[ 85], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 112], 00:14:05.687 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 118], 00:14:05.687 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 134], 95.00th=[ 146], 00:14:05.687 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 199], 99.95th=[ 205], 00:14:05.687 | 99.99th=[ 205] 00:14:05.687 bw ( KiB/s): min=108032, max=146432, per=6.40%, avg=134481.10, stdev=10133.94, samples=20 00:14:05.687 iops : min= 422, max= 572, avg=525.00, stdev=39.46, samples=20 00:14:05.687 lat (msec) : 20=0.02%, 100=1.33%, 250=98.65% 00:14:05.687 cpu : usr=0.23%, sys=2.02%, ctx=1367, majf=0, minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=5321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job4: (groupid=0, jobs=1): err= 0: pid=78752: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=985, BW=246MiB/s (258MB/s)(2467MiB/10015msec) 00:14:05.687 slat (usec): min=17, max=90292, avg=996.42, stdev=2639.74 00:14:05.687 clat (msec): min=2, max=174, avg=63.87, stdev=21.60 00:14:05.687 lat (msec): min=2, max=215, avg=64.87, stdev=21.90 00:14:05.687 clat percentiles (msec): 00:14:05.687 | 1.00th=[ 38], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:14:05.687 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 61], 00:14:05.687 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 68], 95.00th=[ 130], 00:14:05.687 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 176], 00:14:05.687 | 99.99th=[ 176] 00:14:05.687 bw ( KiB/s): min=99527, max=284672, per=11.97%, avg=251546.11, stdev=56755.78, samples=19 00:14:05.687 iops : min= 388, max= 1112, avg=982.26, stdev=221.68, samples=19 00:14:05.687 lat (msec) : 4=0.04%, 10=0.09%, 20=0.12%, 50=2.68%, 100=91.13% 00:14:05.687 lat (msec) : 250=5.94% 00:14:05.687 cpu : usr=0.47%, sys=3.65%, ctx=2120, majf=0, 
minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=9866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job5: (groupid=0, jobs=1): err= 0: pid=78753: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=527, BW=132MiB/s (138MB/s)(1330MiB/10088msec) 00:14:05.687 slat (usec): min=18, max=76168, avg=1877.41, stdev=4263.99 00:14:05.687 clat (msec): min=44, max=198, avg=119.39, stdev=11.96 00:14:05.687 lat (msec): min=44, max=198, avg=121.27, stdev=12.22 00:14:05.687 clat percentiles (msec): 00:14:05.687 | 1.00th=[ 105], 5.00th=[ 108], 10.00th=[ 109], 20.00th=[ 111], 00:14:05.687 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 116], 60.00th=[ 118], 00:14:05.687 | 70.00th=[ 122], 80.00th=[ 127], 90.00th=[ 136], 95.00th=[ 144], 00:14:05.687 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 199], 00:14:05.687 | 99.99th=[ 199] 00:14:05.687 bw ( KiB/s): min=100553, max=144095, per=6.40%, avg=134441.65, stdev=11369.99, samples=20 00:14:05.687 iops : min= 392, max= 562, avg=524.85, stdev=44.41, samples=20 00:14:05.687 lat (msec) : 50=0.08%, 100=0.19%, 250=99.74% 00:14:05.687 cpu : usr=0.33%, sys=2.23%, ctx=1278, majf=0, minf=4097 00:14:05.687 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.687 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.687 issued rwts: total=5319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.687 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.687 job6: (groupid=0, jobs=1): err= 0: pid=78754: Wed Nov 20 15:04:34 2024 00:14:05.687 read: IOPS=516, BW=129MiB/s (135MB/s)(1304MiB/10091msec) 00:14:05.688 slat (usec): min=18, max=53087, avg=1916.85, stdev=4216.18 00:14:05.688 clat (msec): min=64, max=203, avg=121.79, stdev=11.16 00:14:05.688 lat (msec): min=74, max=203, avg=123.70, stdev=11.38 00:14:05.688 clat percentiles (msec): 00:14:05.688 | 1.00th=[ 96], 5.00th=[ 107], 10.00th=[ 110], 20.00th=[ 113], 00:14:05.688 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:14:05.688 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 136], 95.00th=[ 140], 00:14:05.688 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 197], 99.95th=[ 197], 00:14:05.688 | 99.99th=[ 203] 00:14:05.688 bw ( KiB/s): min=112865, max=145920, per=6.27%, avg=131786.50, stdev=9090.03, samples=20 00:14:05.688 iops : min= 440, max= 570, avg=514.60, stdev=35.57, samples=20 00:14:05.688 lat (msec) : 100=1.96%, 250=98.04% 00:14:05.688 cpu : usr=0.35%, sys=2.07%, ctx=1338, majf=0, minf=4097 00:14:05.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.688 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.688 job7: (groupid=0, jobs=1): err= 0: pid=78755: Wed Nov 20 15:04:34 2024 00:14:05.688 read: IOPS=511, BW=128MiB/s (134MB/s)(1289MiB/10090msec) 00:14:05.688 slat (usec): min=18, max=78195, avg=1936.26, stdev=4485.26 00:14:05.688 clat (msec): 
min=37, max=195, avg=123.12, stdev=11.58 00:14:05.688 lat (msec): min=37, max=213, avg=125.06, stdev=11.81 00:14:05.688 clat percentiles (msec): 00:14:05.688 | 1.00th=[ 95], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 115], 00:14:05.688 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 126], 00:14:05.688 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 142], 00:14:05.688 | 99.00th=[ 155], 99.50th=[ 161], 99.90th=[ 192], 99.95th=[ 197], 00:14:05.688 | 99.99th=[ 197] 00:14:05.688 bw ( KiB/s): min=114688, max=142848, per=6.20%, avg=130344.75, stdev=7953.33, samples=20 00:14:05.688 iops : min= 448, max= 558, avg=509.05, stdev=31.08, samples=20 00:14:05.688 lat (msec) : 50=0.10%, 100=1.73%, 250=98.18% 00:14:05.688 cpu : usr=0.26%, sys=1.98%, ctx=1247, majf=0, minf=4097 00:14:05.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.688 issued rwts: total=5157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.688 job8: (groupid=0, jobs=1): err= 0: pid=78756: Wed Nov 20 15:04:34 2024 00:14:05.688 read: IOPS=513, BW=128MiB/s (135MB/s)(1296MiB/10094msec) 00:14:05.688 slat (usec): min=17, max=97365, avg=1925.67, stdev=4429.96 00:14:05.688 clat (msec): min=15, max=201, avg=122.57, stdev=12.93 00:14:05.688 lat (msec): min=16, max=211, avg=124.50, stdev=13.20 00:14:05.688 clat percentiles (msec): 00:14:05.688 | 1.00th=[ 84], 5.00th=[ 106], 10.00th=[ 110], 20.00th=[ 114], 00:14:05.688 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 125], 00:14:05.688 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 142], 00:14:05.688 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 197], 99.95th=[ 199], 00:14:05.688 | 99.99th=[ 203] 00:14:05.688 bw ( KiB/s): min=115943, max=143872, per=6.23%, avg=130964.85, stdev=8282.89, samples=20 00:14:05.688 iops : min= 452, max= 562, avg=511.20, stdev=32.33, samples=20 00:14:05.688 lat (msec) : 20=0.04%, 100=2.60%, 250=97.36% 00:14:05.688 cpu : usr=0.21%, sys=1.96%, ctx=1295, majf=0, minf=4097 00:14:05.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.688 issued rwts: total=5183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.688 job9: (groupid=0, jobs=1): err= 0: pid=78757: Wed Nov 20 15:04:34 2024 00:14:05.688 read: IOPS=1145, BW=286MiB/s (300MB/s)(2869MiB/10018msec) 00:14:05.688 slat (usec): min=17, max=59373, avg=859.15, stdev=2045.75 00:14:05.688 clat (msec): min=4, max=156, avg=54.92, stdev=13.08 00:14:05.688 lat (msec): min=4, max=163, avg=55.78, stdev=13.25 00:14:05.688 clat percentiles (msec): 00:14:05.688 | 1.00th=[ 29], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 51], 00:14:05.688 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:14:05.688 | 70.00th=[ 61], 80.00th=[ 62], 90.00th=[ 65], 95.00th=[ 67], 00:14:05.688 | 99.00th=[ 86], 99.50th=[ 116], 99.90th=[ 146], 99.95th=[ 157], 00:14:05.688 | 99.99th=[ 157] 00:14:05.688 bw ( KiB/s): min=246802, max=478720, per=13.90%, avg=291984.65, stdev=54534.39, samples=20 00:14:05.688 iops : min= 964, max= 1870, avg=1140.25, stdev=213.11, samples=20 
00:14:05.688 lat (msec) : 10=0.19%, 20=0.11%, 50=19.52%, 100=79.43%, 250=0.75% 00:14:05.688 cpu : usr=0.51%, sys=4.07%, ctx=2523, majf=0, minf=4097 00:14:05.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:14:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.688 issued rwts: total=11477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.688 job10: (groupid=0, jobs=1): err= 0: pid=78758: Wed Nov 20 15:04:34 2024 00:14:05.688 read: IOPS=511, BW=128MiB/s (134MB/s)(1290MiB/10093msec) 00:14:05.688 slat (usec): min=18, max=40509, avg=1936.45, stdev=4266.16 00:14:05.688 clat (msec): min=36, max=209, avg=123.02, stdev=12.28 00:14:05.688 lat (msec): min=37, max=209, avg=124.95, stdev=12.47 00:14:05.688 clat percentiles (msec): 00:14:05.688 | 1.00th=[ 95], 5.00th=[ 107], 10.00th=[ 111], 20.00th=[ 114], 00:14:05.688 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 124], 60.00th=[ 126], 00:14:05.688 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 138], 95.00th=[ 142], 00:14:05.688 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 199], 99.95th=[ 199], 00:14:05.688 | 99.99th=[ 209] 00:14:05.688 bw ( KiB/s): min=115200, max=141540, per=6.21%, avg=130367.20, stdev=7452.97, samples=20 00:14:05.688 iops : min= 450, max= 552, avg=509.00, stdev=29.03, samples=20 00:14:05.688 lat (msec) : 50=0.08%, 100=2.36%, 250=97.56% 00:14:05.688 cpu : usr=0.28%, sys=1.90%, ctx=1233, majf=0, minf=4097 00:14:05.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:14:05.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:05.688 issued rwts: total=5160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.688 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.688 00:14:05.688 Run status group 0 (all jobs): 00:14:05.688 READ: bw=2052MiB/s (2151MB/s), 128MiB/s-486MiB/s (134MB/s-509MB/s), io=20.2GiB (21.7GB), run=10009-10094msec 00:14:05.688 00:14:05.688 Disk stats (read/write): 00:14:05.688 nvme0n1: ios=10606/0, merge=0/0, ticks=1226807/0, in_queue=1226807, util=97.77% 00:14:05.688 nvme10n1: ios=37862/0, merge=0/0, ticks=1212208/0, in_queue=1212208, util=98.01% 00:14:05.688 nvme1n1: ios=10557/0, merge=0/0, ticks=1228571/0, in_queue=1228571, util=98.21% 00:14:05.688 nvme2n1: ios=10525/0, merge=0/0, ticks=1226875/0, in_queue=1226875, util=98.38% 00:14:05.688 nvme3n1: ios=19135/0, merge=0/0, ticks=1207214/0, in_queue=1207214, util=98.34% 00:14:05.688 nvme4n1: ios=10510/0, merge=0/0, ticks=1227999/0, in_queue=1227999, util=98.64% 00:14:05.688 nvme5n1: ios=10303/0, merge=0/0, ticks=1226322/0, in_queue=1226322, util=98.60% 00:14:05.688 nvme6n1: ios=10189/0, merge=0/0, ticks=1227989/0, in_queue=1227989, util=98.70% 00:14:05.688 nvme7n1: ios=10246/0, merge=0/0, ticks=1227734/0, in_queue=1227734, util=99.04% 00:14:05.688 nvme8n1: ios=22876/0, merge=0/0, ticks=1239631/0, in_queue=1239631, util=99.06% 00:14:05.688 nvme9n1: ios=10198/0, merge=0/0, ticks=1228025/0, in_queue=1228025, util=99.09% 00:14:05.689 15:04:34 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:14:05.689 [global] 00:14:05.689 thread=1 00:14:05.689 invalidate=1 00:14:05.689 rw=randwrite 00:14:05.689 time_based=1 00:14:05.689 runtime=10 
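The read pass closes with the aggregate "Run status" line (2052 MiB/s summed across the 11 jobs) and per-device "Disk stats" confirming that every namespace served I/O. The script then immediately starts the write pass with scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10; its job file, echoed here and continuing below, differs from the read pass only in rw=randwrite. For reference, a hedged one-liner that reproduces the same pattern against a single namespace with plain fio, using the parameters printed in the trace:

    fio --name=job0 --filename=/dev/nvme0n1 --rw=randwrite --bs=262144 --iodepth=64 \
        --ioengine=libaio --direct=1 --thread --time_based --runtime=10 --norandommap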
00:14:05.689 ioengine=libaio 00:14:05.689 direct=1 00:14:05.689 bs=262144 00:14:05.689 iodepth=64 00:14:05.689 norandommap=1 00:14:05.689 numjobs=1 00:14:05.689 00:14:05.689 [job0] 00:14:05.689 filename=/dev/nvme0n1 00:14:05.689 [job1] 00:14:05.689 filename=/dev/nvme10n1 00:14:05.689 [job2] 00:14:05.689 filename=/dev/nvme1n1 00:14:05.689 [job3] 00:14:05.689 filename=/dev/nvme2n1 00:14:05.689 [job4] 00:14:05.689 filename=/dev/nvme3n1 00:14:05.689 [job5] 00:14:05.689 filename=/dev/nvme4n1 00:14:05.689 [job6] 00:14:05.689 filename=/dev/nvme5n1 00:14:05.689 [job7] 00:14:05.689 filename=/dev/nvme6n1 00:14:05.689 [job8] 00:14:05.689 filename=/dev/nvme7n1 00:14:05.689 [job9] 00:14:05.689 filename=/dev/nvme8n1 00:14:05.689 [job10] 00:14:05.689 filename=/dev/nvme9n1 00:14:05.689 Could not set queue depth (nvme0n1) 00:14:05.689 Could not set queue depth (nvme10n1) 00:14:05.689 Could not set queue depth (nvme1n1) 00:14:05.689 Could not set queue depth (nvme2n1) 00:14:05.689 Could not set queue depth (nvme3n1) 00:14:05.689 Could not set queue depth (nvme4n1) 00:14:05.689 Could not set queue depth (nvme5n1) 00:14:05.689 Could not set queue depth (nvme6n1) 00:14:05.689 Could not set queue depth (nvme7n1) 00:14:05.689 Could not set queue depth (nvme8n1) 00:14:05.689 Could not set queue depth (nvme9n1) 00:14:05.689 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:14:05.689 fio-3.35 00:14:05.689 Starting 11 threads 00:14:15.701 00:14:15.701 job0: (groupid=0, jobs=1): err= 0: pid=78957: Wed Nov 20 15:04:45 2024 00:14:15.701 write: IOPS=440, BW=110MiB/s (115MB/s)(1128MiB/10252msec); 0 zone resets 00:14:15.701 slat (usec): min=16, max=27068, avg=2212.81, stdev=4021.65 00:14:15.701 clat (msec): min=14, max=513, avg=143.09, stdev=50.66 00:14:15.701 lat (msec): min=14, max=513, avg=145.30, stdev=51.15 00:14:15.701 clat percentiles (msec): 00:14:15.701 | 1.00th=[ 69], 5.00th=[ 89], 10.00th=[ 102], 20.00th=[ 114], 00:14:15.701 | 30.00th=[ 117], 40.00th=[ 122], 50.00th=[ 144], 60.00th=[ 150], 00:14:15.701 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 197], 00:14:15.701 | 99.00th=[ 380], 99.50th=[ 414], 99.90th=[ 498], 99.95th=[ 498], 00:14:15.701 | 99.99th=[ 514] 00:14:15.701 
bw ( KiB/s): min=55296, max=166400, per=8.05%, avg=113903.85, stdev=28126.83, samples=20 00:14:15.701 iops : min= 216, max= 650, avg=444.90, stdev=109.89, samples=20 00:14:15.701 lat (msec) : 20=0.18%, 50=0.53%, 100=9.22%, 250=85.86%, 500=4.17% 00:14:15.701 lat (msec) : 750=0.04% 00:14:15.701 cpu : usr=0.75%, sys=1.11%, ctx=5609, majf=0, minf=1 00:14:15.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.701 issued rwts: total=0,4513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.701 job1: (groupid=0, jobs=1): err= 0: pid=78962: Wed Nov 20 15:04:45 2024 00:14:15.701 write: IOPS=439, BW=110MiB/s (115MB/s)(1127MiB/10254msec); 0 zone resets 00:14:15.701 slat (usec): min=17, max=26964, avg=2199.15, stdev=4038.89 00:14:15.701 clat (msec): min=27, max=509, avg=143.29, stdev=50.19 00:14:15.701 lat (msec): min=27, max=510, avg=145.49, stdev=50.68 00:14:15.701 clat percentiles (msec): 00:14:15.701 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 103], 20.00th=[ 115], 00:14:15.701 | 30.00th=[ 117], 40.00th=[ 121], 50.00th=[ 144], 60.00th=[ 150], 00:14:15.701 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 199], 00:14:15.701 | 99.00th=[ 376], 99.50th=[ 409], 99.90th=[ 493], 99.95th=[ 493], 00:14:15.701 | 99.99th=[ 510] 00:14:15.701 bw ( KiB/s): min=55296, max=168448, per=8.04%, avg=113755.95, stdev=27932.04, samples=20 00:14:15.701 iops : min= 216, max= 658, avg=444.35, stdev=109.11, samples=20 00:14:15.701 lat (msec) : 50=0.44%, 100=9.21%, 250=86.14%, 500=4.17%, 750=0.04% 00:14:15.701 cpu : usr=0.77%, sys=1.34%, ctx=5140, majf=0, minf=1 00:14:15.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.701 issued rwts: total=0,4508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.701 job2: (groupid=0, jobs=1): err= 0: pid=78970: Wed Nov 20 15:04:45 2024 00:14:15.701 write: IOPS=970, BW=243MiB/s (254MB/s)(2439MiB/10050msec); 0 zone resets 00:14:15.701 slat (usec): min=15, max=27114, avg=1020.25, stdev=1814.83 00:14:15.701 clat (msec): min=29, max=136, avg=64.89, stdev=18.47 00:14:15.701 lat (msec): min=29, max=136, avg=65.91, stdev=18.70 00:14:15.701 clat percentiles (msec): 00:14:15.701 | 1.00th=[ 52], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:14:15.701 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 56], 60.00th=[ 57], 00:14:15.701 | 70.00th=[ 62], 80.00th=[ 84], 90.00th=[ 92], 95.00th=[ 102], 00:14:15.701 | 99.00th=[ 127], 99.50th=[ 130], 99.90th=[ 136], 99.95th=[ 138], 00:14:15.701 | 99.99th=[ 138] 00:14:15.701 bw ( KiB/s): min=126976, max=299520, per=17.53%, avg=248029.05, stdev=59660.12, samples=20 00:14:15.701 iops : min= 496, max= 1170, avg=968.80, stdev=233.01, samples=20 00:14:15.701 lat (msec) : 50=0.19%, 100=94.53%, 250=5.28% 00:14:15.701 cpu : usr=1.55%, sys=2.26%, ctx=13079, majf=0, minf=1 00:14:15.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:14:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.701 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.701 
issued rwts: total=0,9755,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.701 job3: (groupid=0, jobs=1): err= 0: pid=78971: Wed Nov 20 15:04:45 2024 00:14:15.701 write: IOPS=351, BW=87.8MiB/s (92.0MB/s)(892MiB/10158msec); 0 zone resets 00:14:15.701 slat (usec): min=17, max=19512, avg=2727.99, stdev=4872.47 00:14:15.701 clat (msec): min=35, max=324, avg=179.47, stdev=30.07 00:14:15.701 lat (msec): min=35, max=324, avg=182.20, stdev=30.34 00:14:15.701 clat percentiles (msec): 00:14:15.701 | 1.00th=[ 61], 5.00th=[ 111], 10.00th=[ 148], 20.00th=[ 174], 00:14:15.701 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:14:15.701 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 215], 00:14:15.701 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 326], 00:14:15.701 | 99.99th=[ 326] 00:14:15.701 bw ( KiB/s): min=71680, max=139776, per=6.34%, avg=89659.15, stdev=13230.89, samples=20 00:14:15.701 iops : min= 280, max= 546, avg=350.20, stdev=51.69, samples=20 00:14:15.701 lat (msec) : 50=0.31%, 100=3.76%, 250=95.20%, 500=0.73% 00:14:15.701 cpu : usr=0.60%, sys=0.96%, ctx=4568, majf=0, minf=1 00:14:15.701 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:14:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,3566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job4: (groupid=0, jobs=1): err= 0: pid=78972: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=462, BW=116MiB/s (121MB/s)(1187MiB/10253msec); 0 zone resets 00:14:15.702 slat (usec): min=17, max=70467, avg=2089.03, stdev=4017.64 00:14:15.702 clat (msec): min=17, max=510, avg=136.03, stdev=53.85 00:14:15.702 lat (msec): min=19, max=510, avg=138.12, stdev=54.42 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 1.00th=[ 82], 5.00th=[ 84], 10.00th=[ 88], 20.00th=[ 89], 00:14:15.702 | 30.00th=[ 93], 40.00th=[ 124], 50.00th=[ 142], 60.00th=[ 150], 00:14:15.702 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 194], 00:14:15.702 | 99.00th=[ 376], 99.50th=[ 409], 99.90th=[ 493], 99.95th=[ 493], 00:14:15.702 | 99.99th=[ 510] 00:14:15.702 bw ( KiB/s): min=55296, max=186368, per=8.47%, avg=119874.35, stdev=37758.75, samples=20 00:14:15.702 iops : min= 216, max= 728, avg=468.25, stdev=147.50, samples=20 00:14:15.702 lat (msec) : 20=0.04%, 50=0.44%, 100=33.94%, 250=61.58%, 500=3.96% 00:14:15.702 lat (msec) : 750=0.04% 00:14:15.702 cpu : usr=0.68%, sys=1.37%, ctx=5187, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,4747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job5: (groupid=0, jobs=1): err= 0: pid=78973: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=340, BW=85.1MiB/s (89.2MB/s)(866MiB/10169msec); 0 zone resets 00:14:15.702 slat (usec): min=15, max=33379, avg=2884.89, stdev=5075.39 00:14:15.702 clat (msec): min=11, max=337, avg=185.02, stdev=26.18 00:14:15.702 lat (msec): min=11, max=337, avg=187.90, stdev=26.09 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 
1.00th=[ 64], 5.00th=[ 148], 10.00th=[ 165], 20.00th=[ 178], 00:14:15.702 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:14:15.702 | 70.00th=[ 192], 80.00th=[ 197], 90.00th=[ 207], 95.00th=[ 215], 00:14:15.702 | 99.00th=[ 239], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 338], 00:14:15.702 | 99.99th=[ 338] 00:14:15.702 bw ( KiB/s): min=73728, max=112640, per=6.15%, avg=87005.60, stdev=7381.98, samples=20 00:14:15.702 iops : min= 288, max= 440, avg=339.85, stdev=28.83, samples=20 00:14:15.702 lat (msec) : 20=0.23%, 50=0.58%, 100=0.81%, 250=97.52%, 500=0.87% 00:14:15.702 cpu : usr=0.62%, sys=1.05%, ctx=3895, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,3462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job6: (groupid=0, jobs=1): err= 0: pid=78974: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=344, BW=86.2MiB/s (90.3MB/s)(876MiB/10168msec); 0 zone resets 00:14:15.702 slat (usec): min=18, max=23687, avg=2849.57, stdev=5001.71 00:14:15.702 clat (msec): min=8, max=343, avg=182.78, stdev=27.96 00:14:15.702 lat (msec): min=8, max=343, avg=185.63, stdev=27.94 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 1.00th=[ 60], 5.00th=[ 140], 10.00th=[ 169], 20.00th=[ 176], 00:14:15.702 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:14:15.702 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 203], 95.00th=[ 215], 00:14:15.702 | 99.00th=[ 243], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 342], 00:14:15.702 | 99.99th=[ 342] 00:14:15.702 bw ( KiB/s): min=75776, max=120832, per=6.22%, avg=88080.80, stdev=8859.61, samples=20 00:14:15.702 iops : min= 296, max= 472, avg=344.05, stdev=34.61, samples=20 00:14:15.702 lat (msec) : 10=0.11%, 20=0.11%, 50=0.57%, 100=1.23%, 250=97.00% 00:14:15.702 lat (msec) : 500=0.97% 00:14:15.702 cpu : usr=0.51%, sys=1.02%, ctx=2739, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job7: (groupid=0, jobs=1): err= 0: pid=78975: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=351, BW=87.8MiB/s (92.1MB/s)(893MiB/10168msec); 0 zone resets 00:14:15.702 slat (usec): min=18, max=47504, avg=2763.74, stdev=4914.69 00:14:15.702 clat (msec): min=15, max=337, avg=179.28, stdev=31.27 00:14:15.702 lat (msec): min=15, max=337, avg=182.04, stdev=31.43 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 1.00th=[ 53], 5.00th=[ 121], 10.00th=[ 127], 20.00th=[ 171], 00:14:15.702 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:14:15.702 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 203], 95.00th=[ 215], 00:14:15.702 | 99.00th=[ 239], 99.50th=[ 292], 99.90th=[ 326], 99.95th=[ 338], 00:14:15.702 | 99.99th=[ 338] 00:14:15.702 bw ( KiB/s): min=73580, max=128000, per=6.35%, avg=89836.10, stdev=13163.27, samples=20 00:14:15.702 iops : min= 287, max= 500, avg=350.90, stdev=51.44, samples=20 00:14:15.702 lat (msec) : 20=0.22%, 50=0.67%, 
100=0.90%, 250=97.37%, 500=0.84% 00:14:15.702 cpu : usr=0.41%, sys=1.10%, ctx=4939, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,3573,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job8: (groupid=0, jobs=1): err= 0: pid=78976: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=345, BW=86.3MiB/s (90.5MB/s)(876MiB/10154msec); 0 zone resets 00:14:15.702 slat (usec): min=18, max=27352, avg=2851.21, stdev=4982.97 00:14:15.702 clat (msec): min=11, max=322, avg=182.52, stdev=25.25 00:14:15.702 lat (msec): min=11, max=322, avg=185.37, stdev=25.16 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 1.00th=[ 64], 5.00th=[ 148], 10.00th=[ 163], 20.00th=[ 176], 00:14:15.702 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:14:15.702 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 201], 95.00th=[ 215], 00:14:15.702 | 99.00th=[ 232], 99.50th=[ 279], 99.90th=[ 313], 99.95th=[ 321], 00:14:15.702 | 99.99th=[ 321] 00:14:15.702 bw ( KiB/s): min=71680, max=112640, per=6.22%, avg=88064.00, stdev=7861.98, samples=20 00:14:15.702 iops : min= 280, max= 440, avg=344.00, stdev=30.71, samples=20 00:14:15.702 lat (msec) : 20=0.23%, 50=0.46%, 100=0.97%, 250=97.60%, 500=0.74% 00:14:15.702 cpu : usr=0.62%, sys=0.99%, ctx=3155, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job9: (groupid=0, jobs=1): err= 0: pid=78982: Wed Nov 20 15:04:45 2024 00:14:15.702 write: IOPS=444, BW=111MiB/s (117MB/s)(1140MiB/10244msec); 0 zone resets 00:14:15.702 slat (usec): min=19, max=38646, avg=2170.73, stdev=4018.38 00:14:15.702 clat (msec): min=14, max=511, avg=141.59, stdev=51.39 00:14:15.702 lat (msec): min=14, max=511, avg=143.76, stdev=51.92 00:14:15.702 clat percentiles (msec): 00:14:15.702 | 1.00th=[ 51], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 111], 00:14:15.702 | 30.00th=[ 117], 40.00th=[ 120], 50.00th=[ 144], 60.00th=[ 150], 00:14:15.702 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 176], 95.00th=[ 197], 00:14:15.702 | 99.00th=[ 380], 99.50th=[ 414], 99.90th=[ 498], 99.95th=[ 498], 00:14:15.702 | 99.99th=[ 510] 00:14:15.702 bw ( KiB/s): min=55808, max=184320, per=8.13%, avg=115043.15, stdev=30840.05, samples=20 00:14:15.702 iops : min= 218, max= 720, avg=449.35, stdev=120.49, samples=20 00:14:15.702 lat (msec) : 20=0.09%, 50=0.90%, 100=12.97%, 250=81.90%, 500=4.10% 00:14:15.702 lat (msec) : 750=0.04% 00:14:15.702 cpu : usr=0.76%, sys=1.11%, ctx=7020, majf=0, minf=1 00:14:15.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.702 issued rwts: total=0,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.702 job10: (groupid=0, jobs=1): err= 0: 
pid=78984: Wed Nov 20 15:04:45 2024 00:14:15.703 write: IOPS=1091, BW=273MiB/s (286MB/s)(2748MiB/10066msec); 0 zone resets 00:14:15.703 slat (usec): min=15, max=9176, avg=904.88, stdev=1573.91 00:14:15.703 clat (msec): min=11, max=165, avg=57.69, stdev=14.13 00:14:15.703 lat (msec): min=12, max=165, avg=58.59, stdev=14.27 00:14:15.703 clat percentiles (msec): 00:14:15.703 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 50], 20.00th=[ 52], 00:14:15.703 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 55], 00:14:15.703 | 70.00th=[ 56], 80.00th=[ 60], 90.00th=[ 66], 95.00th=[ 93], 00:14:15.703 | 99.00th=[ 122], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 159], 00:14:15.703 | 99.99th=[ 165] 00:14:15.703 bw ( KiB/s): min=186741, max=315392, per=19.77%, avg=279745.30, stdev=43753.85, samples=20 00:14:15.703 iops : min= 729, max= 1232, avg=1092.70, stdev=170.95, samples=20 00:14:15.703 lat (msec) : 20=0.11%, 50=13.19%, 100=83.79%, 250=2.91% 00:14:15.703 cpu : usr=1.54%, sys=2.90%, ctx=13549, majf=0, minf=1 00:14:15.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:15.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:15.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:14:15.703 issued rwts: total=0,10991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:15.703 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:15.703 00:14:15.703 Run status group 0 (all jobs): 00:14:15.703 WRITE: bw=1382MiB/s (1449MB/s), 85.1MiB/s-273MiB/s (89.2MB/s-286MB/s), io=13.8GiB (14.9GB), run=10050-10254msec 00:14:15.703 00:14:15.703 Disk stats (read/write): 00:14:15.703 nvme0n1: ios=49/8998, merge=0/0, ticks=52/1232280, in_queue=1232332, util=97.81% 00:14:15.703 nvme10n1: ios=49/8986, merge=0/0, ticks=51/1232585, in_queue=1232636, util=97.81% 00:14:15.703 nvme1n1: ios=32/19256, merge=0/0, ticks=37/1212716, in_queue=1212753, util=97.97% 00:14:15.703 nvme2n1: ios=0/6953, merge=0/0, ticks=0/1204800, in_queue=1204800, util=97.74% 00:14:15.703 nvme3n1: ios=0/9464, merge=0/0, ticks=0/1232913, in_queue=1232913, util=98.06% 00:14:15.703 nvme4n1: ios=0/6761, merge=0/0, ticks=0/1206275, in_queue=1206275, util=98.25% 00:14:15.703 nvme5n1: ios=0/6856, merge=0/0, ticks=0/1208824, in_queue=1208824, util=98.49% 00:14:15.703 nvme6n1: ios=0/6982, merge=0/0, ticks=0/1206799, in_queue=1206799, util=98.44% 00:14:15.703 nvme7n1: ios=0/6828, merge=0/0, ticks=0/1204132, in_queue=1204132, util=98.49% 00:14:15.703 nvme8n1: ios=0/9090, merge=0/0, ticks=0/1232244, in_queue=1232244, util=98.87% 00:14:15.703 nvme9n1: ios=0/21783, merge=0/0, ticks=0/1208767, in_queue=1208767, util=98.93% 00:14:15.703 15:04:45 -- target/multiconnection.sh@36 -- # sync 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # seq 1 11 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.703 15:04:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:14:15.703 15:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 
00:14:15.703 15:04:45 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:15.703 15:04:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:14:15.703 15:04:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:14:15.703 15:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:14:15.703 15:04:45 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:15.703 15:04:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:14:15.703 15:04:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:14:15.703 15:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:14:15.703 15:04:45 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:45 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:15.703 15:04:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:14:15.703 15:04:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:14:15.703 15:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:45 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:15.703 15:04:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:45 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:45 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:45 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:14:15.703 15:04:45 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:14:15.703 15:04:45 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:14:15.703 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:14:15.703 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:14:15.703 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:14:15.703 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.703 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.703 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:14:15.703 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.703 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:14:15.703 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.703 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:14:15.703 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.703 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.703 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.703 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:14:15.703 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:14:15.704 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:14:15.704 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:14:15.704 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.704 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 
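The same teardown sequence repeats for each of the eleven subsystems: disconnect the initiator, wait until no block device with the matching SPDK<i> serial remains, then delete the subsystem on the target. A condensed, stand-alone sketch of that loop, assuming the standard scripts/rpc.py client rather than the rpc_cmd test wrapper used here:

for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # poll until lsblk no longer reports a device with serial SPDK<i>
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done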
00:14:15.704 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.704 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.704 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.704 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.704 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:14:15.704 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:14:15.704 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:14:15.704 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:14:15.704 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.704 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:14:15.704 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.704 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.704 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.704 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.704 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:14:15.704 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:14:15.704 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:14:15.704 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:14:15.704 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.704 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:14:15.704 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.704 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.704 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.704 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.704 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:14:15.704 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:14:15.704 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:14:15.704 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:14:15.704 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.704 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:14:15.704 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.704 
15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.704 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.704 15:04:46 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:14:15.704 15:04:46 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:14:15.704 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:14:15.704 15:04:46 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:14:15.704 15:04:46 -- common/autotest_common.sh@1208 -- # local i=0 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:14:15.704 15:04:46 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:14:15.704 15:04:46 -- common/autotest_common.sh@1220 -- # return 0 00:14:15.704 15:04:46 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:14:15.704 15:04:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.704 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:15.963 15:04:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.963 15:04:46 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:14:15.963 15:04:46 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:14:15.963 15:04:46 -- target/multiconnection.sh@47 -- # nvmftestfini 00:14:15.963 15:04:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:15.963 15:04:46 -- nvmf/common.sh@116 -- # sync 00:14:15.963 15:04:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:15.963 15:04:46 -- nvmf/common.sh@119 -- # set +e 00:14:15.963 15:04:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:15.963 15:04:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:15.963 rmmod nvme_tcp 00:14:15.963 rmmod nvme_fabrics 00:14:15.963 rmmod nvme_keyring 00:14:15.963 15:04:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:15.963 15:04:46 -- nvmf/common.sh@123 -- # set -e 00:14:15.963 15:04:46 -- nvmf/common.sh@124 -- # return 0 00:14:15.963 15:04:46 -- nvmf/common.sh@477 -- # '[' -n 78296 ']' 00:14:15.963 15:04:46 -- nvmf/common.sh@478 -- # killprocess 78296 00:14:15.963 15:04:46 -- common/autotest_common.sh@936 -- # '[' -z 78296 ']' 00:14:15.963 15:04:46 -- common/autotest_common.sh@940 -- # kill -0 78296 00:14:15.963 15:04:46 -- common/autotest_common.sh@941 -- # uname 00:14:15.963 15:04:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.963 15:04:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78296 00:14:15.963 killing process with pid 78296 00:14:15.963 15:04:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:15.963 15:04:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:15.963 15:04:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78296' 00:14:15.963 15:04:46 -- common/autotest_common.sh@955 -- # kill 78296 00:14:15.963 15:04:46 -- common/autotest_common.sh@960 -- # wait 78296 00:14:16.220 15:04:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:16.220 15:04:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:16.220 15:04:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:16.221 15:04:46 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.221 15:04:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:16.221 15:04:46 
-- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.221 15:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.221 15:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.221 15:04:46 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:16.221 00:14:16.221 real 0m48.222s 00:14:16.221 user 2m36.879s 00:14:16.221 sys 0m34.520s 00:14:16.221 ************************************ 00:14:16.221 END TEST nvmf_multiconnection 00:14:16.221 ************************************ 00:14:16.221 15:04:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:16.221 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:16.221 15:04:46 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:16.221 15:04:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:16.221 15:04:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.221 15:04:46 -- common/autotest_common.sh@10 -- # set +x 00:14:16.221 ************************************ 00:14:16.221 START TEST nvmf_initiator_timeout 00:14:16.221 ************************************ 00:14:16.221 15:04:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:14:16.479 * Looking for test storage... 00:14:16.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:16.479 15:04:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:16.479 15:04:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:16.479 15:04:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:16.479 15:04:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:16.479 15:04:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:16.479 15:04:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:16.479 15:04:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:16.479 15:04:47 -- scripts/common.sh@335 -- # IFS=.-: 00:14:16.479 15:04:47 -- scripts/common.sh@335 -- # read -ra ver1 00:14:16.479 15:04:47 -- scripts/common.sh@336 -- # IFS=.-: 00:14:16.479 15:04:47 -- scripts/common.sh@336 -- # read -ra ver2 00:14:16.479 15:04:47 -- scripts/common.sh@337 -- # local 'op=<' 00:14:16.479 15:04:47 -- scripts/common.sh@339 -- # ver1_l=2 00:14:16.479 15:04:47 -- scripts/common.sh@340 -- # ver2_l=1 00:14:16.479 15:04:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:16.479 15:04:47 -- scripts/common.sh@343 -- # case "$op" in 00:14:16.479 15:04:47 -- scripts/common.sh@344 -- # : 1 00:14:16.479 15:04:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:16.479 15:04:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:16.479 15:04:47 -- scripts/common.sh@364 -- # decimal 1 00:14:16.479 15:04:47 -- scripts/common.sh@352 -- # local d=1 00:14:16.479 15:04:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:16.479 15:04:47 -- scripts/common.sh@354 -- # echo 1 00:14:16.479 15:04:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:16.479 15:04:47 -- scripts/common.sh@365 -- # decimal 2 00:14:16.479 15:04:47 -- scripts/common.sh@352 -- # local d=2 00:14:16.479 15:04:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:16.479 15:04:47 -- scripts/common.sh@354 -- # echo 2 00:14:16.479 15:04:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:16.479 15:04:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:16.479 15:04:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:16.479 15:04:47 -- scripts/common.sh@367 -- # return 0 00:14:16.479 15:04:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:16.479 15:04:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.479 --rc genhtml_branch_coverage=1 00:14:16.479 --rc genhtml_function_coverage=1 00:14:16.479 --rc genhtml_legend=1 00:14:16.479 --rc geninfo_all_blocks=1 00:14:16.479 --rc geninfo_unexecuted_blocks=1 00:14:16.479 00:14:16.479 ' 00:14:16.479 15:04:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.479 --rc genhtml_branch_coverage=1 00:14:16.479 --rc genhtml_function_coverage=1 00:14:16.479 --rc genhtml_legend=1 00:14:16.479 --rc geninfo_all_blocks=1 00:14:16.479 --rc geninfo_unexecuted_blocks=1 00:14:16.479 00:14:16.479 ' 00:14:16.479 15:04:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:16.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.479 --rc genhtml_branch_coverage=1 00:14:16.479 --rc genhtml_function_coverage=1 00:14:16.479 --rc genhtml_legend=1 00:14:16.479 --rc geninfo_all_blocks=1 00:14:16.479 --rc geninfo_unexecuted_blocks=1 00:14:16.479 00:14:16.479 ' 00:14:16.479 15:04:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:16.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:16.480 --rc genhtml_branch_coverage=1 00:14:16.480 --rc genhtml_function_coverage=1 00:14:16.480 --rc genhtml_legend=1 00:14:16.480 --rc geninfo_all_blocks=1 00:14:16.480 --rc geninfo_unexecuted_blocks=1 00:14:16.480 00:14:16.480 ' 00:14:16.480 15:04:47 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:16.480 15:04:47 -- nvmf/common.sh@7 -- # uname -s 00:14:16.480 15:04:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.480 15:04:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.480 15:04:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.480 15:04:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.480 15:04:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.480 15:04:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.480 15:04:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.480 15:04:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.480 15:04:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.480 15:04:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 
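This generated host NQN (together with the host ID derived from it just below) is what the initiator later passes to nvme connect when attaching to the target. A minimal sketch of that call, using the subsystem NQN, address, and port from this run, and assuming the uuid suffix of the NQN doubles as the host ID:

HOSTNQN="$(nvme gen-hostnqn)"        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID="${HOSTNQN##*uuid:}"          # bare uuid reused as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"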
00:14:16.480 15:04:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:14:16.480 15:04:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.480 15:04:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.480 15:04:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:16.480 15:04:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:16.480 15:04:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.480 15:04:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.480 15:04:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.480 15:04:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.480 15:04:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.480 15:04:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.480 15:04:47 -- paths/export.sh@5 -- # export PATH 00:14:16.480 15:04:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.480 15:04:47 -- nvmf/common.sh@46 -- # : 0 00:14:16.480 15:04:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:16.480 15:04:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:16.480 15:04:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:16.480 15:04:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.480 15:04:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.480 15:04:47 -- nvmf/common.sh@32 -- # 
'[' -n '' ']' 00:14:16.480 15:04:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:16.480 15:04:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:16.480 15:04:47 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.480 15:04:47 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.480 15:04:47 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:14:16.480 15:04:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:16.480 15:04:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.480 15:04:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:16.480 15:04:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:16.480 15:04:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:16.480 15:04:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.480 15:04:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.480 15:04:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.480 15:04:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:16.480 15:04:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:16.480 15:04:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.480 15:04:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.480 15:04:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:16.480 15:04:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:16.480 15:04:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:16.480 15:04:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:16.480 15:04:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:16.480 15:04:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.480 15:04:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:16.480 15:04:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:16.480 15:04:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:16.480 15:04:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:16.480 15:04:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:16.480 15:04:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:16.480 Cannot find device "nvmf_tgt_br" 00:14:16.480 15:04:47 -- nvmf/common.sh@154 -- # true 00:14:16.480 15:04:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.480 Cannot find device "nvmf_tgt_br2" 00:14:16.480 15:04:47 -- nvmf/common.sh@155 -- # true 00:14:16.480 15:04:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:16.480 15:04:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:16.480 Cannot find device "nvmf_tgt_br" 00:14:16.480 15:04:47 -- nvmf/common.sh@157 -- # true 00:14:16.480 15:04:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:16.480 Cannot find device "nvmf_tgt_br2" 00:14:16.480 15:04:47 -- nvmf/common.sh@158 -- # true 00:14:16.480 15:04:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:16.480 15:04:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:16.739 15:04:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:14:16.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.739 15:04:47 -- nvmf/common.sh@161 -- # true 00:14:16.739 15:04:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:16.739 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:16.739 15:04:47 -- nvmf/common.sh@162 -- # true 00:14:16.739 15:04:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:16.739 15:04:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:16.739 15:04:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:16.739 15:04:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:16.739 15:04:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:16.739 15:04:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:16.739 15:04:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:16.739 15:04:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:16.739 15:04:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:16.739 15:04:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:16.739 15:04:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:16.739 15:04:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:16.739 15:04:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:16.739 15:04:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:16.739 15:04:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:16.739 15:04:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:16.739 15:04:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:16.739 15:04:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:16.739 15:04:47 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:16.739 15:04:47 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:16.739 15:04:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:16.740 15:04:47 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:16.740 15:04:47 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:16.740 15:04:47 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:16.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:14:16.740 00:14:16.740 --- 10.0.0.2 ping statistics --- 00:14:16.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.740 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:14:16.740 15:04:47 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:16.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:16.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:16.740 00:14:16.740 --- 10.0.0.3 ping statistics --- 00:14:16.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.740 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:16.740 15:04:47 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:16.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:16.740 00:14:16.740 --- 10.0.0.1 ping statistics --- 00:14:16.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.740 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:16.740 15:04:47 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.740 15:04:47 -- nvmf/common.sh@421 -- # return 0 00:14:16.740 15:04:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:16.740 15:04:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.740 15:04:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:16.740 15:04:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:16.740 15:04:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.740 15:04:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:16.740 15:04:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:16.740 15:04:47 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:14:16.740 15:04:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:16.740 15:04:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:16.740 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:14:16.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.740 15:04:47 -- nvmf/common.sh@469 -- # nvmfpid=79356 00:14:16.740 15:04:47 -- nvmf/common.sh@470 -- # waitforlisten 79356 00:14:16.740 15:04:47 -- common/autotest_common.sh@829 -- # '[' -z 79356 ']' 00:14:16.740 15:04:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.740 15:04:47 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:16.740 15:04:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:16.740 15:04:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.740 15:04:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:16.740 15:04:47 -- common/autotest_common.sh@10 -- # set +x 00:14:16.998 [2024-11-20 15:04:47.575191] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:14:16.998 [2024-11-20 15:04:47.575314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.998 [2024-11-20 15:04:47.719400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:16.998 [2024-11-20 15:04:47.760626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:16.998 [2024-11-20 15:04:47.760844] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.998 [2024-11-20 15:04:47.760871] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.998 [2024-11-20 15:04:47.760887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
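The pings above exercise the veth/bridge topology assembled earlier with iproute2: a host-side initiator interface (nvmf_init_if, 10.0.0.1/24) and target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.2 and 10.0.0.3), joined through the nvmf_br bridge. A condensed sketch of that setup, with the interface and namespace names used in this run and the second target interface handled the same way as the first:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# allow NVMe/TCP traffic in on the initiator-side interface
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT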
00:14:16.998 [2024-11-20 15:04:47.761002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.998 [2024-11-20 15:04:47.761786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.998 [2024-11-20 15:04:47.761918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:16.998 [2024-11-20 15:04:47.761937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.931 15:04:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:17.931 15:04:48 -- common/autotest_common.sh@862 -- # return 0 00:14:17.931 15:04:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:17.931 15:04:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:17.931 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 15:04:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 Malloc0 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 Delay0 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 [2024-11-20 15:04:48.792712] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.189 15:04:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.189 15:04:48 -- common/autotest_common.sh@10 -- # set +x 00:14:18.189 [2024-11-20 15:04:48.820877] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.189 15:04:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.189 15:04:48 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.189 15:04:48 -- common/autotest_common.sh@1187 -- # local i=0 00:14:18.189 15:04:48 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.189 15:04:48 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:14:18.189 15:04:48 -- common/autotest_common.sh@1194 -- # sleep 2 00:14:20.748 15:04:50 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:14:20.748 15:04:50 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:14:20.748 15:04:50 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:14:20.749 15:04:50 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:14:20.749 15:04:50 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:14:20.749 15:04:50 -- common/autotest_common.sh@1197 -- # return 0 00:14:20.749 15:04:50 -- target/initiator_timeout.sh@35 -- # fio_pid=79420 00:14:20.749 15:04:50 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:14:20.749 15:04:50 -- target/initiator_timeout.sh@37 -- # sleep 3 00:14:20.749 [global] 00:14:20.749 thread=1 00:14:20.749 invalidate=1 00:14:20.749 rw=write 00:14:20.749 time_based=1 00:14:20.749 runtime=60 00:14:20.749 ioengine=libaio 00:14:20.749 direct=1 00:14:20.749 bs=4096 00:14:20.749 iodepth=1 00:14:20.749 norandommap=0 00:14:20.749 numjobs=1 00:14:20.749 00:14:20.749 verify_dump=1 00:14:20.749 verify_backlog=512 00:14:20.749 verify_state_save=0 00:14:20.749 do_verify=1 00:14:20.749 verify=crc32c-intel 00:14:20.749 [job0] 00:14:20.749 filename=/dev/nvme0n1 00:14:20.749 Could not set queue depth (nvme0n1) 00:14:20.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:20.749 fio-3.35 00:14:20.749 Starting 1 thread 00:14:23.280 15:04:53 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:14:23.280 15:04:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.280 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.280 true 00:14:23.280 15:04:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.280 15:04:53 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:14:23.280 15:04:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.280 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.280 true 00:14:23.280 15:04:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.280 15:04:53 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:14:23.280 15:04:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.280 15:04:53 -- common/autotest_common.sh@10 -- # set +x 00:14:23.280 true 00:14:23.280 15:04:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.280 15:04:54 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:14:23.280 15:04:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.281 15:04:54 -- common/autotest_common.sh@10 -- # set +x 00:14:23.281 true 00:14:23.281 15:04:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.281 15:04:54 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:14:26.567 15:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.567 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 true 00:14:26.567 15:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:14:26.567 15:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.567 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 true 00:14:26.567 15:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:14:26.567 15:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.567 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 true 00:14:26.567 15:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:14:26.567 15:04:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.567 15:04:57 -- common/autotest_common.sh@10 -- # set +x 00:14:26.567 true 00:14:26.567 15:04:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:14:26.567 15:04:57 -- target/initiator_timeout.sh@54 -- # wait 79420 00:15:22.785 00:15:22.785 job0: (groupid=0, jobs=1): err= 0: pid=79441: Wed Nov 20 15:05:51 2024 00:15:22.785 read: IOPS=742, BW=2970KiB/s (3041kB/s)(174MiB/60000msec) 00:15:22.785 slat (usec): min=10, max=9054, avg=17.95, stdev=56.40 00:15:22.785 clat (usec): min=165, max=40380k, avg=1127.36, stdev=191308.41 00:15:22.785 lat (usec): min=176, max=40380k, avg=1145.31, stdev=191308.41 00:15:22.785 clat percentiles (usec): 00:15:22.785 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 196], 00:15:22.785 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 212], 60.00th=[ 219], 00:15:22.785 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 260], 95.00th=[ 281], 00:15:22.785 | 99.00th=[ 424], 99.50th=[ 510], 99.90th=[ 627], 99.95th=[ 701], 00:15:22.785 | 99.99th=[ 1188] 00:15:22.785 write: IOPS=750, BW=3004KiB/s (3076kB/s)(176MiB/60000msec); 0 zone resets 00:15:22.785 slat (usec): min=13, max=583, avg=25.99, stdev= 9.68 00:15:22.785 clat (usec): min=116, max=3067, avg=169.24, stdev=41.91 00:15:22.785 lat (usec): min=141, max=3115, avg=195.23, stdev=44.94 00:15:22.785 clat percentiles (usec): 00:15:22.785 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 149], 00:15:22.785 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:15:22.785 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 198], 95.00th=[ 217], 00:15:22.785 | 99.00th=[ 338], 99.50th=[ 400], 99.90th=[ 545], 99.95th=[ 627], 00:15:22.785 | 99.99th=[ 1221] 00:15:22.785 bw ( KiB/s): min= 4096, max=11488, per=100.00%, avg=9030.36, stdev=1464.31, samples=39 00:15:22.785 iops : min= 1024, max= 2872, avg=2257.59, stdev=366.08, samples=39 00:15:22.785 lat (usec) : 250=92.40%, 500=7.21%, 750=0.34%, 1000=0.02% 00:15:22.785 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:15:22.785 cpu : usr=0.69%, sys=2.55%, ctx=89682, majf=0, minf=5 00:15:22.785 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:22.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.785 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.785 issued rwts: total=44551,45056,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.785 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:22.785 00:15:22.785 Run status group 0 (all jobs): 00:15:22.785 READ: bw=2970KiB/s (3041kB/s), 2970KiB/s-2970KiB/s (3041kB/s-3041kB/s), io=174MiB (182MB), run=60000-60000msec 00:15:22.785 WRITE: bw=3004KiB/s (3076kB/s), 3004KiB/s-3004KiB/s (3076kB/s-3076kB/s), io=176MiB (185MB), run=60000-60000msec 00:15:22.785 00:15:22.785 Disk stats (read/write): 00:15:22.785 nvme0n1: ios=44781/44576, merge=0/0, ticks=10155/8010, in_queue=18165, util=99.52% 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.785 15:05:51 -- common/autotest_common.sh@1208 -- # local i=0 00:15:22.785 15:05:51 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:22.785 15:05:51 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.785 15:05:51 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:22.785 15:05:51 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.785 nvmf hotplug test: fio successful as expected 00:15:22.785 15:05:51 -- common/autotest_common.sh@1220 -- # return 0 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.785 15:05:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.785 15:05:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.785 15:05:51 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:15:22.785 15:05:51 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:15:22.785 15:05:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:22.785 15:05:51 -- nvmf/common.sh@116 -- # sync 00:15:22.785 15:05:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:22.785 15:05:51 -- nvmf/common.sh@119 -- # set +e 00:15:22.785 15:05:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:22.785 15:05:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:22.785 rmmod nvme_tcp 00:15:22.785 rmmod nvme_fabrics 00:15:22.785 rmmod nvme_keyring 00:15:22.785 15:05:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:22.785 15:05:51 -- nvmf/common.sh@123 -- # set -e 00:15:22.785 15:05:51 -- nvmf/common.sh@124 -- # return 0 00:15:22.785 15:05:51 -- nvmf/common.sh@477 -- # '[' -n 79356 ']' 00:15:22.785 15:05:51 -- nvmf/common.sh@478 -- # killprocess 79356 00:15:22.785 15:05:51 -- common/autotest_common.sh@936 -- # '[' -z 79356 ']' 00:15:22.785 15:05:51 -- common/autotest_common.sh@940 -- # kill -0 79356 00:15:22.785 15:05:51 -- common/autotest_common.sh@941 -- # uname 00:15:22.785 15:05:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:22.785 15:05:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79356 00:15:22.785 killing process with 
pid 79356 00:15:22.785 15:05:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:22.785 15:05:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:22.785 15:05:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79356' 00:15:22.785 15:05:51 -- common/autotest_common.sh@955 -- # kill 79356 00:15:22.785 15:05:51 -- common/autotest_common.sh@960 -- # wait 79356 00:15:22.785 15:05:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:22.785 15:05:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:22.785 15:05:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:22.785 15:05:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:22.785 15:05:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:22.785 15:05:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.785 15:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.785 15:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.785 15:05:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:22.785 ************************************ 00:15:22.785 END TEST nvmf_initiator_timeout 00:15:22.785 ************************************ 00:15:22.785 00:15:22.785 real 1m4.595s 00:15:22.785 user 3m53.703s 00:15:22.785 sys 0m21.658s 00:15:22.785 15:05:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:22.785 15:05:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.785 15:05:51 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:15:22.785 15:05:51 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:15:22.785 15:05:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.785 15:05:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.785 15:05:51 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:15:22.785 15:05:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.785 15:05:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.785 15:05:51 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:15:22.785 15:05:51 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:22.785 15:05:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:22.785 15:05:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.785 15:05:51 -- common/autotest_common.sh@10 -- # set +x 00:15:22.785 ************************************ 00:15:22.786 START TEST nvmf_identify 00:15:22.786 ************************************ 00:15:22.786 15:05:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:15:22.786 * Looking for test storage... 
00:15:22.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:22.786 15:05:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:22.786 15:05:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:22.786 15:05:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:22.786 15:05:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:22.786 15:05:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:22.786 15:05:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:22.786 15:05:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:22.786 15:05:51 -- scripts/common.sh@335 -- # IFS=.-: 00:15:22.786 15:05:51 -- scripts/common.sh@335 -- # read -ra ver1 00:15:22.786 15:05:51 -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.786 15:05:51 -- scripts/common.sh@336 -- # read -ra ver2 00:15:22.786 15:05:51 -- scripts/common.sh@337 -- # local 'op=<' 00:15:22.786 15:05:51 -- scripts/common.sh@339 -- # ver1_l=2 00:15:22.786 15:05:51 -- scripts/common.sh@340 -- # ver2_l=1 00:15:22.786 15:05:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:22.786 15:05:51 -- scripts/common.sh@343 -- # case "$op" in 00:15:22.786 15:05:51 -- scripts/common.sh@344 -- # : 1 00:15:22.786 15:05:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:22.786 15:05:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:22.786 15:05:51 -- scripts/common.sh@364 -- # decimal 1 00:15:22.786 15:05:51 -- scripts/common.sh@352 -- # local d=1 00:15:22.786 15:05:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.786 15:05:51 -- scripts/common.sh@354 -- # echo 1 00:15:22.786 15:05:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:22.786 15:05:51 -- scripts/common.sh@365 -- # decimal 2 00:15:22.786 15:05:51 -- scripts/common.sh@352 -- # local d=2 00:15:22.786 15:05:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.786 15:05:51 -- scripts/common.sh@354 -- # echo 2 00:15:22.786 15:05:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:22.786 15:05:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:22.786 15:05:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:22.786 15:05:51 -- scripts/common.sh@367 -- # return 0 00:15:22.786 15:05:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.786 15:05:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.786 --rc genhtml_branch_coverage=1 00:15:22.786 --rc genhtml_function_coverage=1 00:15:22.786 --rc genhtml_legend=1 00:15:22.786 --rc geninfo_all_blocks=1 00:15:22.786 --rc geninfo_unexecuted_blocks=1 00:15:22.786 00:15:22.786 ' 00:15:22.786 15:05:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.786 --rc genhtml_branch_coverage=1 00:15:22.786 --rc genhtml_function_coverage=1 00:15:22.786 --rc genhtml_legend=1 00:15:22.786 --rc geninfo_all_blocks=1 00:15:22.786 --rc geninfo_unexecuted_blocks=1 00:15:22.786 00:15:22.786 ' 00:15:22.786 15:05:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.786 --rc genhtml_branch_coverage=1 00:15:22.786 --rc genhtml_function_coverage=1 00:15:22.786 --rc genhtml_legend=1 00:15:22.786 --rc geninfo_all_blocks=1 00:15:22.786 --rc geninfo_unexecuted_blocks=1 00:15:22.786 00:15:22.786 ' 00:15:22.786 
15:05:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:22.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.786 --rc genhtml_branch_coverage=1 00:15:22.786 --rc genhtml_function_coverage=1 00:15:22.786 --rc genhtml_legend=1 00:15:22.786 --rc geninfo_all_blocks=1 00:15:22.786 --rc geninfo_unexecuted_blocks=1 00:15:22.786 00:15:22.786 ' 00:15:22.786 15:05:51 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:22.786 15:05:51 -- nvmf/common.sh@7 -- # uname -s 00:15:22.786 15:05:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.786 15:05:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.786 15:05:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.786 15:05:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.786 15:05:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.786 15:05:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.786 15:05:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.786 15:05:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.786 15:05:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.786 15:05:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.786 15:05:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:15:22.786 15:05:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:15:22.786 15:05:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.786 15:05:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.786 15:05:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:22.786 15:05:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:22.786 15:05:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.786 15:05:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.786 15:05:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.786 15:05:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.786 15:05:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.786 15:05:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.786 15:05:51 -- paths/export.sh@5 -- # export PATH 00:15:22.786 15:05:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.786 15:05:51 -- nvmf/common.sh@46 -- # : 0 00:15:22.786 15:05:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:22.786 15:05:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:22.786 15:05:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:22.786 15:05:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.786 15:05:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.786 15:05:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:22.786 15:05:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:22.786 15:05:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:22.786 15:05:51 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.786 15:05:51 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.786 15:05:51 -- host/identify.sh@14 -- # nvmftestinit 00:15:22.786 15:05:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:22.786 15:05:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.786 15:05:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:22.786 15:05:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:22.786 15:05:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:22.786 15:05:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.786 15:05:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:22.786 15:05:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.786 15:05:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:22.786 15:05:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:22.786 15:05:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:22.786 15:05:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:22.787 15:05:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:22.787 15:05:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:22.787 15:05:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.787 15:05:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.787 15:05:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:22.787 15:05:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:22.787 15:05:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:22.787 15:05:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:22.787 15:05:51 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:22.787 15:05:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.787 15:05:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:22.787 15:05:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:22.787 15:05:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:22.787 15:05:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:22.787 15:05:51 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:22.787 15:05:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:22.787 Cannot find device "nvmf_tgt_br" 00:15:22.787 15:05:51 -- nvmf/common.sh@154 -- # true 00:15:22.787 15:05:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.787 Cannot find device "nvmf_tgt_br2" 00:15:22.787 15:05:51 -- nvmf/common.sh@155 -- # true 00:15:22.787 15:05:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:22.787 15:05:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:22.787 Cannot find device "nvmf_tgt_br" 00:15:22.787 15:05:51 -- nvmf/common.sh@157 -- # true 00:15:22.787 15:05:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:22.787 Cannot find device "nvmf_tgt_br2" 00:15:22.787 15:05:51 -- nvmf/common.sh@158 -- # true 00:15:22.787 15:05:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:22.787 15:05:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:22.787 15:05:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:22.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.787 15:05:52 -- nvmf/common.sh@161 -- # true 00:15:22.787 15:05:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:22.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.787 15:05:52 -- nvmf/common.sh@162 -- # true 00:15:22.787 15:05:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:22.787 15:05:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.787 15:05:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.787 15:05:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.787 15:05:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.787 15:05:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.787 15:05:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.787 15:05:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.787 15:05:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.787 15:05:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:22.787 15:05:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:22.787 15:05:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:22.787 15:05:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:22.787 15:05:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.787 15:05:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:22.787 15:05:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:15:22.787 15:05:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:22.787 15:05:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:22.787 15:05:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.787 15:05:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:22.787 15:05:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:22.787 15:05:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:22.787 15:05:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:22.787 15:05:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:22.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:15:22.787 00:15:22.787 --- 10.0.0.2 ping statistics --- 00:15:22.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.787 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:22.787 15:05:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:22.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:22.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:15:22.787 00:15:22.787 --- 10.0.0.3 ping statistics --- 00:15:22.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.787 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:15:22.787 15:05:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:22.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:22.787 00:15:22.787 --- 10.0.0.1 ping statistics --- 00:15:22.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.787 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:22.787 15:05:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.787 15:05:52 -- nvmf/common.sh@421 -- # return 0 00:15:22.787 15:05:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:22.787 15:05:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.787 15:05:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:22.787 15:05:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:22.787 15:05:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.787 15:05:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:22.787 15:05:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:22.787 15:05:52 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:15:22.787 15:05:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.787 15:05:52 -- common/autotest_common.sh@10 -- # set +x 00:15:22.787 15:05:52 -- host/identify.sh@19 -- # nvmfpid=80288 00:15:22.787 15:05:52 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.787 15:05:52 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:22.787 15:05:52 -- host/identify.sh@23 -- # waitforlisten 80288 00:15:22.787 15:05:52 -- common/autotest_common.sh@829 -- # '[' -z 80288 ']' 00:15:22.787 15:05:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.787 15:05:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.787 15:05:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:22.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.787 15:05:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.787 15:05:52 -- common/autotest_common.sh@10 -- # set +x 00:15:22.787 [2024-11-20 15:05:52.270993] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:22.787 [2024-11-20 15:05:52.271153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.787 [2024-11-20 15:05:52.436347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:22.787 [2024-11-20 15:05:52.482171] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:22.787 [2024-11-20 15:05:52.482486] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.787 [2024-11-20 15:05:52.482513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.787 [2024-11-20 15:05:52.482527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:22.787 [2024-11-20 15:05:52.482660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.787 [2024-11-20 15:05:52.483122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.787 [2024-11-20 15:05:52.483243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.787 [2024-11-20 15:05:52.483255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.787 15:05:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.787 15:05:53 -- common/autotest_common.sh@862 -- # return 0 00:15:22.787 15:05:53 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:22.787 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.787 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.787 [2024-11-20 15:05:53.353853] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.787 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.787 15:05:53 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:15:22.787 15:05:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:22.787 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 15:05:53 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 Malloc0 00:15:22.788 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 15:05:53 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 [2024-11-20 15:05:53.451888] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:22.788 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:15:22.788 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.788 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.788 [2024-11-20 15:05:53.467622] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:22.788 [ 00:15:22.788 { 00:15:22.788 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.788 "subtype": "Discovery", 00:15:22.788 "listen_addresses": [ 00:15:22.788 { 00:15:22.788 "transport": "TCP", 00:15:22.788 "trtype": "TCP", 00:15:22.788 "adrfam": "IPv4", 00:15:22.788 "traddr": "10.0.0.2", 00:15:22.788 "trsvcid": "4420" 00:15:22.788 } 00:15:22.788 ], 00:15:22.788 "allow_any_host": true, 00:15:22.788 "hosts": [] 00:15:22.788 }, 00:15:22.788 { 00:15:22.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.788 "subtype": "NVMe", 00:15:22.788 "listen_addresses": [ 00:15:22.788 { 00:15:22.788 "transport": "TCP", 00:15:22.788 "trtype": "TCP", 00:15:22.788 "adrfam": "IPv4", 00:15:22.788 "traddr": "10.0.0.2", 00:15:22.788 "trsvcid": "4420" 00:15:22.788 } 00:15:22.788 ], 00:15:22.788 "allow_any_host": true, 00:15:22.788 "hosts": [], 00:15:22.788 "serial_number": "SPDK00000000000001", 00:15:22.788 "model_number": "SPDK bdev Controller", 00:15:22.788 "max_namespaces": 32, 00:15:22.788 "min_cntlid": 1, 00:15:22.788 "max_cntlid": 65519, 00:15:22.788 "namespaces": [ 00:15:22.788 { 00:15:22.788 "nsid": 1, 00:15:22.788 "bdev_name": "Malloc0", 00:15:22.788 "name": "Malloc0", 00:15:22.788 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:15:22.788 "eui64": "ABCDEF0123456789", 00:15:22.788 "uuid": "c7d83339-05d9-4db8-9a23-928d138faa78" 00:15:22.788 } 00:15:22.788 ] 00:15:22.788 } 00:15:22.788 ] 00:15:22.788 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.788 15:05:53 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:15:22.788 [2024-11-20 15:05:53.500983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
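The target bring-up traced above (host/identify.sh@18 through @39) can be condensed into a short standalone sketch. Everything in it, paths, NQNs, addresses, sizes and flags, is copied from the log; calling scripts/rpc.py directly and using a fixed sleep are simplifying assumptions, since the test itself drives the same RPCs through its rpc_cmd wrapper and waits on /var/tmp/spdk.sock with waitforlisten.

#!/usr/bin/env bash
# Minimal sketch of the bring-up traced above (host/identify.sh@18-39).
# Paths, NQNs, addresses, sizes and flags are taken from the log; invoking
# scripts/rpc.py directly and the fixed sleep are assumptions, the test uses
# its rpc_cmd and waitforlisten helpers against /var/tmp/spdk.sock instead.
set -e

spdk=/home/vagrant/spdk_repo/spdk
rpc="$spdk/scripts/rpc.py"
netns=nvmf_tgt_ns_spdk         # namespace built earlier in the trace by nvmf_veth_init

# Start the target inside the namespace with the same core mask and trace flags.
ip netns exec "$netns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 3                        # stand-in for waitforlisten on the RPC socket

# TCP transport, a 64 MB malloc bdev, one NVM subsystem, plus the discovery listener.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" bdev_malloc_create 64 512 -b Malloc0
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
       --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_get_subsystems     # prints the JSON dump shown above

# Query the discovery subsystem from the initiator side (10.0.0.1 -> 10.0.0.2).
"$spdk/build/bin/spdk_nvme_identify" \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The "-L all" option passed to spdk_nvme_identify turns on SPDK's debug log flags, which is why the remainder of the trace is dominated by *DEBUG* lines from nvme_tcp.c, nvme_ctrlr.c and nvme_qpair.c as the discovery controller is connected, enabled and queried before the identify summary is printed.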
00:15:22.788 [2024-11-20 15:05:53.501032] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80329 ] 00:15:23.051 [2024-11-20 15:05:53.638400] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:15:23.051 [2024-11-20 15:05:53.638466] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:23.051 [2024-11-20 15:05:53.638474] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:23.051 [2024-11-20 15:05:53.638488] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:23.051 [2024-11-20 15:05:53.638502] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:23.051 [2024-11-20 15:05:53.638635] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:15:23.051 [2024-11-20 15:05:53.642718] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13d1540 0 00:15:23.051 [2024-11-20 15:05:53.642790] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:23.051 [2024-11-20 15:05:53.642800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:23.051 [2024-11-20 15:05:53.642810] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:23.051 [2024-11-20 15:05:53.642814] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:23.051 [2024-11-20 15:05:53.642858] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.051 [2024-11-20 15:05:53.642865] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.051 [2024-11-20 15:05:53.642869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.642886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:23.052 [2024-11-20 15:05:53.642922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.650665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.650690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.650696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.650701] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.650716] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:23.052 [2024-11-20 15:05:53.650724] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:15:23.052 [2024-11-20 15:05:53.650731] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:15:23.052 [2024-11-20 15:05:53.650763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.650774] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 
15:05:53.650778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.650788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.650820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.650902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.650910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.650915] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.650919] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.650926] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:15:23.052 [2024-11-20 15:05:53.650935] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:15:23.052 [2024-11-20 15:05:53.650943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.650948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.650952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.650960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.650979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.651048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.651056] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.651060] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651064] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.651072] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:15:23.052 [2024-11-20 15:05:53.651081] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651089] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651093] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651097] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.651105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.651125] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.651174] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.651181] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.651185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651190] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.651197] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651212] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651216] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.651224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.651241] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.651286] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.651299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.651304] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651308] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.651315] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:15:23.052 [2024-11-20 15:05:53.651321] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651436] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:15:23.052 [2024-11-20 15:05:53.651449] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651469] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.651477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.651496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.651552] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.651566] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.651571] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:15:23.052 [2024-11-20 15:05:53.651576] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.651582] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:23.052 [2024-11-20 15:05:53.651593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651598] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651602] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.052 [2024-11-20 15:05:53.651610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.052 [2024-11-20 15:05:53.651629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.052 [2024-11-20 15:05:53.651697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.052 [2024-11-20 15:05:53.651706] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.052 [2024-11-20 15:05:53.651710] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651714] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.052 [2024-11-20 15:05:53.651721] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:23.052 [2024-11-20 15:05:53.651726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:15:23.052 [2024-11-20 15:05:53.651735] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:15:23.052 [2024-11-20 15:05:53.651752] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:15:23.052 [2024-11-20 15:05:53.651763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.052 [2024-11-20 15:05:53.651767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.651771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.651780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.053 [2024-11-20 15:05:53.651800] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.053 [2024-11-20 15:05:53.651894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.053 [2024-11-20 15:05:53.651901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.053 [2024-11-20 15:05:53.651905] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.651909] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13d1540): datao=0, datal=4096, cccid=0 00:15:23.053 [2024-11-20 15:05:53.651915] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a220) on tqpair(0x13d1540): expected_datao=0, 
payload_size=4096 00:15:23.053 [2024-11-20 15:05:53.651924] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.651929] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.651938] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.053 [2024-11-20 15:05:53.651945] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.053 [2024-11-20 15:05:53.651948] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.651953] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.053 [2024-11-20 15:05:53.651963] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:15:23.053 [2024-11-20 15:05:53.651968] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:15:23.053 [2024-11-20 15:05:53.651973] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:15:23.053 [2024-11-20 15:05:53.651979] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:15:23.053 [2024-11-20 15:05:53.651984] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:15:23.053 [2024-11-20 15:05:53.651990] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:15:23.053 [2024-11-20 15:05:53.652003] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:15:23.053 [2024-11-20 15:05:53.652012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652016] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652020] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.053 [2024-11-20 15:05:53.652048] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.053 [2024-11-20 15:05:53.652109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.053 [2024-11-20 15:05:53.652116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.053 [2024-11-20 15:05:53.652120] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652125] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a220) on tqpair=0x13d1540 00:15:23.053 [2024-11-20 15:05:53.652134] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.053 [2024-11-20 
15:05:53.652156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.053 [2024-11-20 15:05:53.652177] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.053 [2024-11-20 15:05:53.652197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.053 [2024-11-20 15:05:53.652217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:15:23.053 [2024-11-20 15:05:53.652230] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:23.053 [2024-11-20 15:05:53.652238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652242] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.053 [2024-11-20 15:05:53.652274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a220, cid 0, qid 0 00:15:23.053 [2024-11-20 15:05:53.652281] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a380, cid 1, qid 0 00:15:23.053 [2024-11-20 15:05:53.652286] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a4e0, cid 2, qid 0 00:15:23.053 [2024-11-20 15:05:53.652291] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.053 [2024-11-20 15:05:53.652296] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a7a0, cid 4, qid 0 00:15:23.053 [2024-11-20 15:05:53.652393] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.053 [2024-11-20 15:05:53.652408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.053 [2024-11-20 15:05:53.652413] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x140a7a0) on tqpair=0x13d1540 00:15:23.053 [2024-11-20 15:05:53.652425] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:15:23.053 [2024-11-20 15:05:53.652431] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:15:23.053 [2024-11-20 15:05:53.652442] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652451] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13d1540) 00:15:23.053 [2024-11-20 15:05:53.652459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.053 [2024-11-20 15:05:53.652477] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a7a0, cid 4, qid 0 00:15:23.053 [2024-11-20 15:05:53.652540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.053 [2024-11-20 15:05:53.652547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.053 [2024-11-20 15:05:53.652551] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652555] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13d1540): datao=0, datal=4096, cccid=4 00:15:23.053 [2024-11-20 15:05:53.652560] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a7a0) on tqpair(0x13d1540): expected_datao=0, payload_size=4096 00:15:23.053 [2024-11-20 15:05:53.652569] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652573] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.053 [2024-11-20 15:05:53.652588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.053 [2024-11-20 15:05:53.652592] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652596] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a7a0) on tqpair=0x13d1540 00:15:23.053 [2024-11-20 15:05:53.652610] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:15:23.053 [2024-11-20 15:05:53.652637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.053 [2024-11-20 15:05:53.652667] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13d1540) 00:15:23.054 [2024-11-20 15:05:53.652675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.054 [2024-11-20 15:05:53.652684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652688] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652692] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13d1540) 00:15:23.054 [2024-11-20 15:05:53.652699] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.054 [2024-11-20 15:05:53.652725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a7a0, cid 4, qid 0 00:15:23.054 [2024-11-20 15:05:53.652733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a900, cid 5, qid 0 00:15:23.054 [2024-11-20 15:05:53.652844] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.054 [2024-11-20 15:05:53.652856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.054 [2024-11-20 15:05:53.652861] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652865] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13d1540): datao=0, datal=1024, cccid=4 00:15:23.054 [2024-11-20 15:05:53.652870] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a7a0) on tqpair(0x13d1540): expected_datao=0, payload_size=1024 00:15:23.054 [2024-11-20 15:05:53.652878] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652883] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.054 [2024-11-20 15:05:53.652896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.054 [2024-11-20 15:05:53.652899] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a900) on tqpair=0x13d1540 00:15:23.054 [2024-11-20 15:05:53.652923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.054 [2024-11-20 15:05:53.652931] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.054 [2024-11-20 15:05:53.652935] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652939] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a7a0) on tqpair=0x13d1540 00:15:23.054 [2024-11-20 15:05:53.652957] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652963] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.652967] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13d1540) 00:15:23.054 [2024-11-20 15:05:53.652975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.054 [2024-11-20 15:05:53.653000] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a7a0, cid 4, qid 0 00:15:23.054 [2024-11-20 15:05:53.653075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.054 [2024-11-20 15:05:53.653086] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.054 [2024-11-20 15:05:53.653091] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653095] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13d1540): datao=0, datal=3072, cccid=4 00:15:23.054 [2024-11-20 15:05:53.653100] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a7a0) on tqpair(0x13d1540): expected_datao=0, payload_size=3072 00:15:23.054 [2024-11-20 
15:05:53.653108] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653113] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653126] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.054 [2024-11-20 15:05:53.653132] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.054 [2024-11-20 15:05:53.653136] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a7a0) on tqpair=0x13d1540 00:15:23.054 [2024-11-20 15:05:53.653151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653155] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653159] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13d1540) 00:15:23.054 [2024-11-20 15:05:53.653167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.054 [2024-11-20 15:05:53.653190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a7a0, cid 4, qid 0 00:15:23.054 [2024-11-20 15:05:53.653255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.054 [2024-11-20 15:05:53.653262] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.054 [2024-11-20 15:05:53.653266] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653270] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13d1540): datao=0, datal=8, cccid=4 00:15:23.054 [2024-11-20 15:05:53.653275] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x140a7a0) on tqpair(0x13d1540): expected_datao=0, payload_size=8 00:15:23.054 [2024-11-20 15:05:53.653283] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653287] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.054 [2024-11-20 15:05:53.653301] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.054 [2024-11-20 15:05:53.653309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.054 [2024-11-20 15:05:53.653312] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.054 ===================================================== 00:15:23.054 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:15:23.054 ===================================================== 00:15:23.054 Controller Capabilities/Features 00:15:23.054 ================================ 00:15:23.054 Vendor ID: 0000 00:15:23.054 Subsystem Vendor ID: 0000 00:15:23.054 Serial Number: .................... 00:15:23.054 Model Number: ........................................ 
00:15:23.054 Firmware Version: 24.01.1 00:15:23.054 Recommended Arb Burst: 0 00:15:23.054 IEEE OUI Identifier: 00 00 00 00:15:23.054 Multi-path I/O 00:15:23.054 May have multiple subsystem ports: No 00:15:23.054 May have multiple controllers: No 00:15:23.054 Associated with SR-IOV VF: No 00:15:23.054 Max Data Transfer Size: 131072 00:15:23.054 Max Number of Namespaces: 0 00:15:23.054 Max Number of I/O Queues: 1024 00:15:23.054 NVMe Specification Version (VS): 1.3 00:15:23.054 NVMe Specification Version (Identify): 1.3 00:15:23.054 Maximum Queue Entries: 128 00:15:23.054 Contiguous Queues Required: Yes 00:15:23.054 Arbitration Mechanisms Supported 00:15:23.054 Weighted Round Robin: Not Supported 00:15:23.054 Vendor Specific: Not Supported 00:15:23.054 Reset Timeout: 15000 ms 00:15:23.054 Doorbell Stride: 4 bytes 00:15:23.054 NVM Subsystem Reset: Not Supported 00:15:23.054 Command Sets Supported 00:15:23.054 NVM Command Set: Supported 00:15:23.054 Boot Partition: Not Supported 00:15:23.054 Memory Page Size Minimum: 4096 bytes 00:15:23.054 Memory Page Size Maximum: 4096 bytes 00:15:23.054 Persistent Memory Region: Not Supported 00:15:23.054 Optional Asynchronous Events Supported 00:15:23.054 Namespace Attribute Notices: Not Supported 00:15:23.054 Firmware Activation Notices: Not Supported 00:15:23.054 ANA Change Notices: Not Supported 00:15:23.054 PLE Aggregate Log Change Notices: Not Supported 00:15:23.054 LBA Status Info Alert Notices: Not Supported 00:15:23.054 EGE Aggregate Log Change Notices: Not Supported 00:15:23.054 Normal NVM Subsystem Shutdown event: Not Supported 00:15:23.054 Zone Descriptor Change Notices: Not Supported 00:15:23.054 Discovery Log Change Notices: Supported 00:15:23.054 Controller Attributes 00:15:23.054 128-bit Host Identifier: Not Supported 00:15:23.054 Non-Operational Permissive Mode: Not Supported 00:15:23.054 NVM Sets: Not Supported 00:15:23.054 Read Recovery Levels: Not Supported 00:15:23.054 Endurance Groups: Not Supported 00:15:23.054 Predictable Latency Mode: Not Supported 00:15:23.054 Traffic Based Keep ALive: Not Supported 00:15:23.054 Namespace Granularity: Not Supported 00:15:23.054 SQ Associations: Not Supported 00:15:23.054 UUID List: Not Supported 00:15:23.054 Multi-Domain Subsystem: Not Supported 00:15:23.054 Fixed Capacity Management: Not Supported 00:15:23.054 Variable Capacity Management: Not Supported 00:15:23.054 Delete Endurance Group: Not Supported 00:15:23.054 Delete NVM Set: Not Supported 00:15:23.054 Extended LBA Formats Supported: Not Supported 00:15:23.055 Flexible Data Placement Supported: Not Supported 00:15:23.055 00:15:23.055 Controller Memory Buffer Support 00:15:23.055 ================================ 00:15:23.055 Supported: No 00:15:23.055 00:15:23.055 Persistent Memory Region Support 00:15:23.055 ================================ 00:15:23.055 Supported: No 00:15:23.055 00:15:23.055 Admin Command Set Attributes 00:15:23.055 ============================ 00:15:23.055 Security Send/Receive: Not Supported 00:15:23.055 Format NVM: Not Supported 00:15:23.055 Firmware Activate/Download: Not Supported 00:15:23.055 Namespace Management: Not Supported 00:15:23.055 Device Self-Test: Not Supported 00:15:23.055 Directives: Not Supported 00:15:23.055 NVMe-MI: Not Supported 00:15:23.055 Virtualization Management: Not Supported 00:15:23.055 Doorbell Buffer Config: Not Supported 00:15:23.055 Get LBA Status Capability: Not Supported 00:15:23.055 Command & Feature Lockdown Capability: Not Supported 00:15:23.055 Abort Command Limit: 1 00:15:23.055 
Async Event Request Limit: 4 00:15:23.055 Number of Firmware Slots: N/A 00:15:23.055 Firmware Slot 1 Read-Only: N/A 00:15:23.055 [2024-11-20 15:05:53.653317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a7a0) on tqpair=0x13d1540 00:15:23.055 Firmware Activation Without Reset: N/A 00:15:23.055 Multiple Update Detection Support: N/A 00:15:23.055 Firmware Update Granularity: No Information Provided 00:15:23.055 Per-Namespace SMART Log: No 00:15:23.055 Asymmetric Namespace Access Log Page: Not Supported 00:15:23.055 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:15:23.055 Command Effects Log Page: Not Supported 00:15:23.055 Get Log Page Extended Data: Supported 00:15:23.055 Telemetry Log Pages: Not Supported 00:15:23.055 Persistent Event Log Pages: Not Supported 00:15:23.055 Supported Log Pages Log Page: May Support 00:15:23.055 Commands Supported & Effects Log Page: Not Supported 00:15:23.055 Feature Identifiers & Effects Log Page:May Support 00:15:23.055 NVMe-MI Commands & Effects Log Page: May Support 00:15:23.055 Data Area 4 for Telemetry Log: Not Supported 00:15:23.055 Error Log Page Entries Supported: 128 00:15:23.055 Keep Alive: Not Supported 00:15:23.055 00:15:23.055 NVM Command Set Attributes 00:15:23.055 ========================== 00:15:23.055 Submission Queue Entry Size 00:15:23.055 Max: 1 00:15:23.055 Min: 1 00:15:23.055 Completion Queue Entry Size 00:15:23.055 Max: 1 00:15:23.055 Min: 1 00:15:23.055 Number of Namespaces: 0 00:15:23.055 Compare Command: Not Supported 00:15:23.055 Write Uncorrectable Command: Not Supported 00:15:23.055 Dataset Management Command: Not Supported 00:15:23.055 Write Zeroes Command: Not Supported 00:15:23.055 Set Features Save Field: Not Supported 00:15:23.055 Reservations: Not Supported 00:15:23.055 Timestamp: Not Supported 00:15:23.055 Copy: Not Supported 00:15:23.055 Volatile Write Cache: Not Present 00:15:23.055 Atomic Write Unit (Normal): 1 00:15:23.055 Atomic Write Unit (PFail): 1 00:15:23.055 Atomic Compare & Write Unit: 1 00:15:23.055 Fused Compare & Write: Supported 00:15:23.055 Scatter-Gather List 00:15:23.055 SGL Command Set: Supported 00:15:23.055 SGL Keyed: Supported 00:15:23.055 SGL Bit Bucket Descriptor: Not Supported 00:15:23.055 SGL Metadata Pointer: Not Supported 00:15:23.055 Oversized SGL: Not Supported 00:15:23.055 SGL Metadata Address: Not Supported 00:15:23.055 SGL Offset: Supported 00:15:23.055 Transport SGL Data Block: Not Supported 00:15:23.055 Replay Protected Memory Block: Not Supported 00:15:23.055 00:15:23.055 Firmware Slot Information 00:15:23.055 ========================= 00:15:23.055 Active slot: 0 00:15:23.055 00:15:23.055 00:15:23.055 Error Log 00:15:23.055 ========= 00:15:23.055 00:15:23.055 Active Namespaces 00:15:23.055 ================= 00:15:23.055 Discovery Log Page 00:15:23.055 ================== 00:15:23.055 Generation Counter: 2 00:15:23.055 Number of Records: 2 00:15:23.055 Record Format: 0 00:15:23.055 00:15:23.055 Discovery Log Entry 0 00:15:23.055 ---------------------- 00:15:23.055 Transport Type: 3 (TCP) 00:15:23.055 Address Family: 1 (IPv4) 00:15:23.055 Subsystem Type: 3 (Current Discovery Subsystem) 00:15:23.055 Entry Flags: 00:15:23.055 Duplicate Returned Information: 1 00:15:23.055 Explicit Persistent Connection Support for Discovery: 1 00:15:23.055 Transport Requirements: 00:15:23.055 Secure Channel: Not Required 00:15:23.055 Port ID: 0 (0x0000) 00:15:23.055 Controller ID: 65535 (0xffff) 00:15:23.055 Admin Max SQ Size: 128 00:15:23.055 Transport Service
Identifier: 4420 00:15:23.055 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:15:23.055 Transport Address: 10.0.0.2 00:15:23.055 Discovery Log Entry 1 00:15:23.055 ---------------------- 00:15:23.055 Transport Type: 3 (TCP) 00:15:23.055 Address Family: 1 (IPv4) 00:15:23.055 Subsystem Type: 2 (NVM Subsystem) 00:15:23.055 Entry Flags: 00:15:23.055 Duplicate Returned Information: 0 00:15:23.055 Explicit Persistent Connection Support for Discovery: 0 00:15:23.055 Transport Requirements: 00:15:23.055 Secure Channel: Not Required 00:15:23.055 Port ID: 0 (0x0000) 00:15:23.055 Controller ID: 65535 (0xffff) 00:15:23.055 Admin Max SQ Size: 128 00:15:23.055 Transport Service Identifier: 4420 00:15:23.055 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:15:23.055 Transport Address: 10.0.0.2 [2024-11-20 15:05:53.653422] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:15:23.055 [2024-11-20 15:05:53.653439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.055 [2024-11-20 15:05:53.653447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.055 [2024-11-20 15:05:53.653453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.055 [2024-11-20 15:05:53.653460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.055 [2024-11-20 15:05:53.653470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.055 [2024-11-20 15:05:53.653474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.055 [2024-11-20 15:05:53.653478] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.055 [2024-11-20 15:05:53.653487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.055 [2024-11-20 15:05:53.653509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.055 [2024-11-20 15:05:53.653556] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.055 [2024-11-20 15:05:53.653563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.055 [2024-11-20 15:05:53.653567] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.055 [2024-11-20 15:05:53.653571] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.055 [2024-11-20 15:05:53.653580] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.055 [2024-11-20 15:05:53.653584] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.055 [2024-11-20 15:05:53.653588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.055 [2024-11-20 15:05:53.653596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.055 [2024-11-20 15:05:53.653618] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.055 [2024-11-20 15:05:53.653704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.055 
[2024-11-20 15:05:53.653714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.055 [2024-11-20 15:05:53.653718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.653729] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:15:23.056 [2024-11-20 15:05:53.653734] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:15:23.056 [2024-11-20 15:05:53.653745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653750] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653754] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.653762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.653783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.653829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.653836] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.653840] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.653857] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653862] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653866] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.653874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.653890] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.653954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.653966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.653971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.653988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653993] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.653997] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654022] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, 
cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654086] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.654103] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654112] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654137] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654194] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.654209] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654214] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654218] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654291] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654298] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654302] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.654317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654326] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654406] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654410] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654414] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.654425] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654430] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654434] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654518] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654522] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.654534] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.654550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.654567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.654618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.654624] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.654628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.654633] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.658670] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.658681] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.056 [2024-11-20 15:05:53.658685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13d1540) 00:15:23.056 [2024-11-20 15:05:53.658694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.056 [2024-11-20 15:05:53.658720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x140a640, cid 3, qid 0 00:15:23.056 [2024-11-20 15:05:53.658787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.056 [2024-11-20 15:05:53.658795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.056 [2024-11-20 15:05:53.658799] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.056 
[2024-11-20 15:05:53.658803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x140a640) on tqpair=0x13d1540 00:15:23.056 [2024-11-20 15:05:53.658813] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:15:23.056 00:15:23.057 15:05:53 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:15:23.057 [2024-11-20 15:05:53.690932] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:23.057 [2024-11-20 15:05:53.690991] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80331 ] 00:15:23.057 [2024-11-20 15:05:53.824997] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:15:23.057 [2024-11-20 15:05:53.825074] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:15:23.057 [2024-11-20 15:05:53.825081] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:15:23.057 [2024-11-20 15:05:53.825094] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:15:23.057 [2024-11-20 15:05:53.825108] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:15:23.057 [2024-11-20 15:05:53.825255] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:15:23.057 [2024-11-20 15:05:53.825312] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x957540 0 00:15:23.057 [2024-11-20 15:05:53.829660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:15:23.057 [2024-11-20 15:05:53.829686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:15:23.057 [2024-11-20 15:05:53.829692] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:15:23.057 [2024-11-20 15:05:53.829697] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:15:23.057 [2024-11-20 15:05:53.829749] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.829758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.829762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.057 [2024-11-20 15:05:53.829779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:15:23.057 [2024-11-20 15:05:53.829812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.057 [2024-11-20 15:05:53.837671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.057 [2024-11-20 15:05:53.837694] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.057 [2024-11-20 15:05:53.837700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.837705] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.057 [2024-11-20 15:05:53.837719] nvme_fabric.c: 
620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:15:23.057 [2024-11-20 15:05:53.837728] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:15:23.057 [2024-11-20 15:05:53.837735] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:15:23.057 [2024-11-20 15:05:53.837751] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.837757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.837761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.057 [2024-11-20 15:05:53.837772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.057 [2024-11-20 15:05:53.837801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.057 [2024-11-20 15:05:53.838098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.057 [2024-11-20 15:05:53.838113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.057 [2024-11-20 15:05:53.838119] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838123] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.057 [2024-11-20 15:05:53.838130] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:15:23.057 [2024-11-20 15:05:53.838138] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:15:23.057 [2024-11-20 15:05:53.838147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838151] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838156] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.057 [2024-11-20 15:05:53.838164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.057 [2024-11-20 15:05:53.838184] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.057 [2024-11-20 15:05:53.838443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.057 [2024-11-20 15:05:53.838458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.057 [2024-11-20 15:05:53.838463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.057 [2024-11-20 15:05:53.838475] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:15:23.057 [2024-11-20 15:05:53.838485] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:15:23.057 [2024-11-20 15:05:53.838493] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838497] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838502] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.057 [2024-11-20 15:05:53.838510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.057 [2024-11-20 15:05:53.838529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.057 [2024-11-20 15:05:53.838802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.057 [2024-11-20 15:05:53.838817] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.057 [2024-11-20 15:05:53.838822] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838827] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.057 [2024-11-20 15:05:53.838834] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:23.057 [2024-11-20 15:05:53.838846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838851] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.838856] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.057 [2024-11-20 15:05:53.838864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.057 [2024-11-20 15:05:53.838885] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.057 [2024-11-20 15:05:53.839177] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.057 [2024-11-20 15:05:53.839192] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.057 [2024-11-20 15:05:53.839197] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.057 [2024-11-20 15:05:53.839202] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.057 [2024-11-20 15:05:53.839207] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:15:23.057 [2024-11-20 15:05:53.839213] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:15:23.058 [2024-11-20 15:05:53.839222] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:23.058 [2024-11-20 15:05:53.839328] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:15:23.058 [2024-11-20 15:05:53.839336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:23.058 [2024-11-20 15:05:53.839346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.839351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.839356] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.839364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:15:23.058 [2024-11-20 15:05:53.839385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.058 [2024-11-20 15:05:53.839773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.058 [2024-11-20 15:05:53.839788] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.058 [2024-11-20 15:05:53.839793] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.839798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.058 [2024-11-20 15:05:53.839804] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:23.058 [2024-11-20 15:05:53.839815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.839820] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.839824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.839833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.058 [2024-11-20 15:05:53.839853] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.058 [2024-11-20 15:05:53.840070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.058 [2024-11-20 15:05:53.840084] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.058 [2024-11-20 15:05:53.840089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.058 [2024-11-20 15:05:53.840099] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:23.058 [2024-11-20 15:05:53.840105] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.840114] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:15:23.058 [2024-11-20 15:05:53.840132] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.840143] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840148] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.840161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.058 [2024-11-20 15:05:53.840182] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.058 [2024-11-20 15:05:53.840488] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.058 [2024-11-20 15:05:53.840504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.058 
[2024-11-20 15:05:53.840509] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840514] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=4096, cccid=0 00:15:23.058 [2024-11-20 15:05:53.840519] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x990220) on tqpair(0x957540): expected_datao=0, payload_size=4096 00:15:23.058 [2024-11-20 15:05:53.840529] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840534] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.058 [2024-11-20 15:05:53.840551] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.058 [2024-11-20 15:05:53.840555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.058 [2024-11-20 15:05:53.840569] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:15:23.058 [2024-11-20 15:05:53.840575] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:15:23.058 [2024-11-20 15:05:53.840580] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:15:23.058 [2024-11-20 15:05:53.840585] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:15:23.058 [2024-11-20 15:05:53.840590] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:15:23.058 [2024-11-20 15:05:53.840596] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.840610] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.840619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.840628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.840637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.058 [2024-11-20 15:05:53.840673] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.058 [2024-11-20 15:05:53.841047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.058 [2024-11-20 15:05:53.841061] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.058 [2024-11-20 15:05:53.841066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841070] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990220) on tqpair=0x957540 00:15:23.058 [2024-11-20 15:05:53.841079] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 
[2024-11-20 15:05:53.841088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.841095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.058 [2024-11-20 15:05:53.841102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841107] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841111] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.841117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.058 [2024-11-20 15:05:53.841124] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841128] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.841138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.058 [2024-11-20 15:05:53.841145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.058 [2024-11-20 15:05:53.841159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.058 [2024-11-20 15:05:53.841165] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.841180] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:23.058 [2024-11-20 15:05:53.841188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841193] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.058 [2024-11-20 15:05:53.841197] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.059 [2024-11-20 15:05:53.841205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.059 [2024-11-20 15:05:53.841227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990220, cid 0, qid 0 00:15:23.059 [2024-11-20 15:05:53.841235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990380, cid 1, qid 0 00:15:23.059 [2024-11-20 15:05:53.841240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9904e0, cid 2, qid 0 00:15:23.059 [2024-11-20 15:05:53.841245] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.059 [2024-11-20 15:05:53.841251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.059 [2024-11-20 15:05:53.841631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:15:23.059 [2024-11-20 15:05:53.845655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.059 [2024-11-20 15:05:53.845672] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.845678] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.059 [2024-11-20 15:05:53.845684] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:15:23.059 [2024-11-20 15:05:53.845692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.845704] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.845717] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.845726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.845731] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.845736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.059 [2024-11-20 15:05:53.845745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:23.059 [2024-11-20 15:05:53.845772] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.059 [2024-11-20 15:05:53.846048] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.059 [2024-11-20 15:05:53.846064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.059 [2024-11-20 15:05:53.846069] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846073] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.059 [2024-11-20 15:05:53.846140] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.846168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.846184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846189] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.059 [2024-11-20 15:05:53.846203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.059 [2024-11-20 15:05:53.846229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.059 [2024-11-20 15:05:53.846621] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.059 [2024-11-20 15:05:53.846636] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.059 [2024-11-20 15:05:53.846654] 
nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846659] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=4096, cccid=4 00:15:23.059 [2024-11-20 15:05:53.846665] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9907a0) on tqpair(0x957540): expected_datao=0, payload_size=4096 00:15:23.059 [2024-11-20 15:05:53.846674] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846679] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846689] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.059 [2024-11-20 15:05:53.846695] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.059 [2024-11-20 15:05:53.846699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.059 [2024-11-20 15:05:53.846722] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:15:23.059 [2024-11-20 15:05:53.846733] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.846744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.846753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.846762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.059 [2024-11-20 15:05:53.846770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.059 [2024-11-20 15:05:53.846793] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.059 [2024-11-20 15:05:53.847111] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.059 [2024-11-20 15:05:53.847126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.059 [2024-11-20 15:05:53.847131] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847136] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=4096, cccid=4 00:15:23.059 [2024-11-20 15:05:53.847141] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9907a0) on tqpair(0x957540): expected_datao=0, payload_size=4096 00:15:23.059 [2024-11-20 15:05:53.847149] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847154] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847163] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.059 [2024-11-20 15:05:53.847170] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.059 [2024-11-20 15:05:53.847174] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on 
tqpair=0x957540 00:15:23.059 [2024-11-20 15:05:53.847195] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.847207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.847216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.059 [2024-11-20 15:05:53.847233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.059 [2024-11-20 15:05:53.847254] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.059 [2024-11-20 15:05:53.847561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.059 [2024-11-20 15:05:53.847576] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.059 [2024-11-20 15:05:53.847581] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847585] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=4096, cccid=4 00:15:23.059 [2024-11-20 15:05:53.847590] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9907a0) on tqpair(0x957540): expected_datao=0, payload_size=4096 00:15:23.059 [2024-11-20 15:05:53.847599] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847603] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.059 [2024-11-20 15:05:53.847619] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.059 [2024-11-20 15:05:53.847622] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.059 [2024-11-20 15:05:53.847627] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.059 [2024-11-20 15:05:53.847636] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.847658] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.847677] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:15:23.059 [2024-11-20 15:05:53.847685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:23.060 [2024-11-20 15:05:53.847691] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:15:23.060 [2024-11-20 15:05:53.847697] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:15:23.060 [2024-11-20 15:05:53.847702] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:15:23.060 [2024-11-20 15:05:53.847708] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:15:23.060 [2024-11-20 15:05:53.847727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.847733] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.847737] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.847745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.847753] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.847757] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.847761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.847768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:15:23.060 [2024-11-20 15:05:53.847797] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.060 [2024-11-20 15:05:53.847806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990900, cid 5, qid 0 00:15:23.060 [2024-11-20 15:05:53.848109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.060 [2024-11-20 15:05:53.848125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.060 [2024-11-20 15:05:53.848130] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.060 [2024-11-20 15:05:53.848142] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.060 [2024-11-20 15:05:53.848149] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.060 [2024-11-20 15:05:53.848153] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990900) on tqpair=0x957540 00:15:23.060 [2024-11-20 15:05:53.848169] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848174] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.848186] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.848206] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990900, cid 5, qid 0 00:15:23.060 [2024-11-20 15:05:53.848477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.060 [2024-11-20 15:05:53.848491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.060 [2024-11-20 15:05:53.848496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.060 [2024-11-20 
15:05:53.848501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990900) on tqpair=0x957540 00:15:23.060 [2024-11-20 15:05:53.848512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.848529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.848548] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990900, cid 5, qid 0 00:15:23.060 [2024-11-20 15:05:53.848827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.060 [2024-11-20 15:05:53.848842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.060 [2024-11-20 15:05:53.848847] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848852] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990900) on tqpair=0x957540 00:15:23.060 [2024-11-20 15:05:53.848864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.848873] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.848881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.848901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990900, cid 5, qid 0 00:15:23.060 [2024-11-20 15:05:53.849130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.060 [2024-11-20 15:05:53.849144] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.060 [2024-11-20 15:05:53.849149] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849153] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990900) on tqpair=0x957540 00:15:23.060 [2024-11-20 15:05:53.849168] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849173] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.849186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.849194] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849202] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.849209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 
15:05:53.849217] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.849232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.849241] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.060 [2024-11-20 15:05:53.849249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x957540) 00:15:23.060 [2024-11-20 15:05:53.849256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.060 [2024-11-20 15:05:53.849277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990900, cid 5, qid 0 00:15:23.060 [2024-11-20 15:05:53.849284] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9907a0, cid 4, qid 0 00:15:23.060 [2024-11-20 15:05:53.849289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990a60, cid 6, qid 0 00:15:23.060 [2024-11-20 15:05:53.849295] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990bc0, cid 7, qid 0 00:15:23.321 [2024-11-20 15:05:53.853665] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.321 [2024-11-20 15:05:53.853686] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.321 [2024-11-20 15:05:53.853691] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853696] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=8192, cccid=5 00:15:23.321 [2024-11-20 15:05:53.853701] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x990900) on tqpair(0x957540): expected_datao=0, payload_size=8192 00:15:23.321 [2024-11-20 15:05:53.853711] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853716] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853723] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.321 [2024-11-20 15:05:53.853729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.321 [2024-11-20 15:05:53.853733] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853737] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=512, cccid=4 00:15:23.321 [2024-11-20 15:05:53.853742] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9907a0) on tqpair(0x957540): expected_datao=0, payload_size=512 00:15:23.321 [2024-11-20 15:05:53.853750] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853754] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.321 [2024-11-20 15:05:53.853766] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:15:23.321 [2024-11-20 15:05:53.853770] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853774] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=512, cccid=6 00:15:23.321 [2024-11-20 15:05:53.853778] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x990a60) on tqpair(0x957540): expected_datao=0, payload_size=512 00:15:23.321 [2024-11-20 15:05:53.853786] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853790] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:15:23.321 [2024-11-20 15:05:53.853802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:15:23.321 [2024-11-20 15:05:53.853806] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853810] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x957540): datao=0, datal=4096, cccid=7 00:15:23.321 [2024-11-20 15:05:53.853815] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x990bc0) on tqpair(0x957540): expected_datao=0, payload_size=4096 00:15:23.321 [2024-11-20 15:05:53.853823] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853827] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.321 [2024-11-20 15:05:53.853839] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.321 [2024-11-20 15:05:53.853843] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853848] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990900) on tqpair=0x957540 00:15:23.321 [2024-11-20 15:05:53.853868] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.321 [2024-11-20 15:05:53.853875] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.321 [2024-11-20 15:05:53.853879] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853883] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9907a0) on tqpair=0x957540 00:15:23.321 [2024-11-20 15:05:53.853894] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.321 [2024-11-20 15:05:53.853901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.321 [2024-11-20 15:05:53.853905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990a60) on tqpair=0x957540 00:15:23.321 [2024-11-20 15:05:53.853917] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.321 [2024-11-20 15:05:53.853923] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.321 [2024-11-20 15:05:53.853927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.321 [2024-11-20 15:05:53.853932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990bc0) on tqpair=0x957540 00:15:23.321 ===================================================== 00:15:23.321 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.321 
===================================================== 00:15:23.321 Controller Capabilities/Features 00:15:23.321 ================================ 00:15:23.321 Vendor ID: 8086 00:15:23.321 Subsystem Vendor ID: 8086 00:15:23.321 Serial Number: SPDK00000000000001 00:15:23.321 Model Number: SPDK bdev Controller 00:15:23.321 Firmware Version: 24.01.1 00:15:23.321 Recommended Arb Burst: 6 00:15:23.321 IEEE OUI Identifier: e4 d2 5c 00:15:23.321 Multi-path I/O 00:15:23.321 May have multiple subsystem ports: Yes 00:15:23.321 May have multiple controllers: Yes 00:15:23.321 Associated with SR-IOV VF: No 00:15:23.321 Max Data Transfer Size: 131072 00:15:23.321 Max Number of Namespaces: 32 00:15:23.321 Max Number of I/O Queues: 127 00:15:23.321 NVMe Specification Version (VS): 1.3 00:15:23.321 NVMe Specification Version (Identify): 1.3 00:15:23.321 Maximum Queue Entries: 128 00:15:23.321 Contiguous Queues Required: Yes 00:15:23.321 Arbitration Mechanisms Supported 00:15:23.321 Weighted Round Robin: Not Supported 00:15:23.321 Vendor Specific: Not Supported 00:15:23.321 Reset Timeout: 15000 ms 00:15:23.321 Doorbell Stride: 4 bytes 00:15:23.321 NVM Subsystem Reset: Not Supported 00:15:23.321 Command Sets Supported 00:15:23.321 NVM Command Set: Supported 00:15:23.321 Boot Partition: Not Supported 00:15:23.321 Memory Page Size Minimum: 4096 bytes 00:15:23.321 Memory Page Size Maximum: 4096 bytes 00:15:23.321 Persistent Memory Region: Not Supported 00:15:23.321 Optional Asynchronous Events Supported 00:15:23.321 Namespace Attribute Notices: Supported 00:15:23.321 Firmware Activation Notices: Not Supported 00:15:23.321 ANA Change Notices: Not Supported 00:15:23.321 PLE Aggregate Log Change Notices: Not Supported 00:15:23.321 LBA Status Info Alert Notices: Not Supported 00:15:23.321 EGE Aggregate Log Change Notices: Not Supported 00:15:23.321 Normal NVM Subsystem Shutdown event: Not Supported 00:15:23.321 Zone Descriptor Change Notices: Not Supported 00:15:23.322 Discovery Log Change Notices: Not Supported 00:15:23.322 Controller Attributes 00:15:23.322 128-bit Host Identifier: Supported 00:15:23.322 Non-Operational Permissive Mode: Not Supported 00:15:23.322 NVM Sets: Not Supported 00:15:23.322 Read Recovery Levels: Not Supported 00:15:23.322 Endurance Groups: Not Supported 00:15:23.322 Predictable Latency Mode: Not Supported 00:15:23.322 Traffic Based Keep ALive: Not Supported 00:15:23.322 Namespace Granularity: Not Supported 00:15:23.322 SQ Associations: Not Supported 00:15:23.322 UUID List: Not Supported 00:15:23.322 Multi-Domain Subsystem: Not Supported 00:15:23.322 Fixed Capacity Management: Not Supported 00:15:23.322 Variable Capacity Management: Not Supported 00:15:23.322 Delete Endurance Group: Not Supported 00:15:23.322 Delete NVM Set: Not Supported 00:15:23.322 Extended LBA Formats Supported: Not Supported 00:15:23.322 Flexible Data Placement Supported: Not Supported 00:15:23.322 00:15:23.322 Controller Memory Buffer Support 00:15:23.322 ================================ 00:15:23.322 Supported: No 00:15:23.322 00:15:23.322 Persistent Memory Region Support 00:15:23.322 ================================ 00:15:23.322 Supported: No 00:15:23.322 00:15:23.322 Admin Command Set Attributes 00:15:23.322 ============================ 00:15:23.322 Security Send/Receive: Not Supported 00:15:23.322 Format NVM: Not Supported 00:15:23.322 Firmware Activate/Download: Not Supported 00:15:23.322 Namespace Management: Not Supported 00:15:23.322 Device Self-Test: Not Supported 00:15:23.322 Directives: Not Supported 
00:15:23.322 NVMe-MI: Not Supported 00:15:23.322 Virtualization Management: Not Supported 00:15:23.322 Doorbell Buffer Config: Not Supported 00:15:23.322 Get LBA Status Capability: Not Supported 00:15:23.322 Command & Feature Lockdown Capability: Not Supported 00:15:23.322 Abort Command Limit: 4 00:15:23.322 Async Event Request Limit: 4 00:15:23.322 Number of Firmware Slots: N/A 00:15:23.322 Firmware Slot 1 Read-Only: N/A 00:15:23.322 Firmware Activation Without Reset: N/A 00:15:23.322 Multiple Update Detection Support: N/A 00:15:23.322 Firmware Update Granularity: No Information Provided 00:15:23.322 Per-Namespace SMART Log: No 00:15:23.322 Asymmetric Namespace Access Log Page: Not Supported 00:15:23.322 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:15:23.322 Command Effects Log Page: Supported 00:15:23.322 Get Log Page Extended Data: Supported 00:15:23.322 Telemetry Log Pages: Not Supported 00:15:23.322 Persistent Event Log Pages: Not Supported 00:15:23.322 Supported Log Pages Log Page: May Support 00:15:23.322 Commands Supported & Effects Log Page: Not Supported 00:15:23.322 Feature Identifiers & Effects Log Page:May Support 00:15:23.322 NVMe-MI Commands & Effects Log Page: May Support 00:15:23.322 Data Area 4 for Telemetry Log: Not Supported 00:15:23.322 Error Log Page Entries Supported: 128 00:15:23.322 Keep Alive: Supported 00:15:23.322 Keep Alive Granularity: 10000 ms 00:15:23.322 00:15:23.322 NVM Command Set Attributes 00:15:23.322 ========================== 00:15:23.322 Submission Queue Entry Size 00:15:23.322 Max: 64 00:15:23.322 Min: 64 00:15:23.322 Completion Queue Entry Size 00:15:23.322 Max: 16 00:15:23.322 Min: 16 00:15:23.322 Number of Namespaces: 32 00:15:23.322 Compare Command: Supported 00:15:23.322 Write Uncorrectable Command: Not Supported 00:15:23.322 Dataset Management Command: Supported 00:15:23.322 Write Zeroes Command: Supported 00:15:23.322 Set Features Save Field: Not Supported 00:15:23.322 Reservations: Supported 00:15:23.322 Timestamp: Not Supported 00:15:23.322 Copy: Supported 00:15:23.322 Volatile Write Cache: Present 00:15:23.322 Atomic Write Unit (Normal): 1 00:15:23.322 Atomic Write Unit (PFail): 1 00:15:23.322 Atomic Compare & Write Unit: 1 00:15:23.322 Fused Compare & Write: Supported 00:15:23.322 Scatter-Gather List 00:15:23.322 SGL Command Set: Supported 00:15:23.322 SGL Keyed: Supported 00:15:23.322 SGL Bit Bucket Descriptor: Not Supported 00:15:23.322 SGL Metadata Pointer: Not Supported 00:15:23.322 Oversized SGL: Not Supported 00:15:23.322 SGL Metadata Address: Not Supported 00:15:23.322 SGL Offset: Supported 00:15:23.322 Transport SGL Data Block: Not Supported 00:15:23.322 Replay Protected Memory Block: Not Supported 00:15:23.322 00:15:23.322 Firmware Slot Information 00:15:23.322 ========================= 00:15:23.322 Active slot: 1 00:15:23.322 Slot 1 Firmware Revision: 24.01.1 00:15:23.322 00:15:23.322 00:15:23.322 Commands Supported and Effects 00:15:23.322 ============================== 00:15:23.322 Admin Commands 00:15:23.322 -------------- 00:15:23.322 Get Log Page (02h): Supported 00:15:23.322 Identify (06h): Supported 00:15:23.322 Abort (08h): Supported 00:15:23.322 Set Features (09h): Supported 00:15:23.322 Get Features (0Ah): Supported 00:15:23.322 Asynchronous Event Request (0Ch): Supported 00:15:23.322 Keep Alive (18h): Supported 00:15:23.322 I/O Commands 00:15:23.322 ------------ 00:15:23.322 Flush (00h): Supported LBA-Change 00:15:23.322 Write (01h): Supported LBA-Change 00:15:23.322 Read (02h): Supported 00:15:23.322 Compare (05h): 
Supported 00:15:23.322 Write Zeroes (08h): Supported LBA-Change 00:15:23.322 Dataset Management (09h): Supported LBA-Change 00:15:23.322 Copy (19h): Supported LBA-Change 00:15:23.322 Unknown (79h): Supported LBA-Change 00:15:23.322 Unknown (7Ah): Supported 00:15:23.322 00:15:23.322 Error Log 00:15:23.322 ========= 00:15:23.322 00:15:23.322 Arbitration 00:15:23.322 =========== 00:15:23.322 Arbitration Burst: 1 00:15:23.322 00:15:23.322 Power Management 00:15:23.322 ================ 00:15:23.322 Number of Power States: 1 00:15:23.322 Current Power State: Power State #0 00:15:23.322 Power State #0: 00:15:23.322 Max Power: 0.00 W 00:15:23.322 Non-Operational State: Operational 00:15:23.322 Entry Latency: Not Reported 00:15:23.322 Exit Latency: Not Reported 00:15:23.322 Relative Read Throughput: 0 00:15:23.322 Relative Read Latency: 0 00:15:23.322 Relative Write Throughput: 0 00:15:23.322 Relative Write Latency: 0 00:15:23.322 Idle Power: Not Reported 00:15:23.322 Active Power: Not Reported 00:15:23.322 Non-Operational Permissive Mode: Not Supported 00:15:23.322 00:15:23.322 Health Information 00:15:23.322 ================== 00:15:23.322 Critical Warnings: 00:15:23.322 Available Spare Space: OK 00:15:23.322 Temperature: OK 00:15:23.322 Device Reliability: OK 00:15:23.322 Read Only: No 00:15:23.322 Volatile Memory Backup: OK 00:15:23.322 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:23.322 Temperature Threshold: [2024-11-20 15:05:53.854055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.322 [2024-11-20 15:05:53.854063] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.322 [2024-11-20 15:05:53.854067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x957540) 00:15:23.322 [2024-11-20 15:05:53.854077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.322 [2024-11-20 15:05:53.854106] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990bc0, cid 7, qid 0 00:15:23.322 [2024-11-20 15:05:53.854422] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.322 [2024-11-20 15:05:53.854439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.322 [2024-11-20 15:05:53.854444] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990bc0) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.854487] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:15:23.323 [2024-11-20 15:05:53.854503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.323 [2024-11-20 15:05:53.854511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.323 [2024-11-20 15:05:53.854518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.323 [2024-11-20 15:05:53.854525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:23.323 [2024-11-20 15:05:53.854535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854540] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.854553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.854577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.854940] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.854956] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.854961] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.854975] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854980] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.854984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.854992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.855028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.855363] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.855378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.855383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855387] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.855393] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:15:23.323 [2024-11-20 15:05:53.855399] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:15:23.323 [2024-11-20 15:05:53.855410] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.855428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.855447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.855734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.855749] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.855754] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.855772] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.855781] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.855789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.855810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.856103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.856117] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.856122] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856127] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.856138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856147] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.856155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.856175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.856401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.856414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.856419] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.856435] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856440] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856444] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.856452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.856471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.856772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.856787] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.856792] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856797] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.856808] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.856818] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.856826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.856846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.857123] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.857134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.857139] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857143] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.857155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857159] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.857172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.857190] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.857480] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.857493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.323 [2024-11-20 15:05:53.857498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857502] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.323 [2024-11-20 15:05:53.857514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857519] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.323 [2024-11-20 15:05:53.857523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.323 [2024-11-20 15:05:53.857531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.323 [2024-11-20 15:05:53.857552] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.323 [2024-11-20 15:05:53.861656] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.323 [2024-11-20 15:05:53.861677] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.324 [2024-11-20 15:05:53.861683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.324 [2024-11-20 15:05:53.861687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.324 [2024-11-20 15:05:53.861702] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:15:23.324 [2024-11-20 15:05:53.861708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:15:23.324 [2024-11-20 15:05:53.861712] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x957540) 00:15:23.324 [2024-11-20 15:05:53.861721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:23.324 [2024-11-20 15:05:53.861747] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x990640, cid 3, qid 0 00:15:23.324 [2024-11-20 15:05:53.862130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:15:23.324 [2024-11-20 15:05:53.862145] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:15:23.324 [2024-11-20 15:05:53.862150] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:15:23.324 [2024-11-20 15:05:53.862155] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x990640) on tqpair=0x957540 00:15:23.324 [2024-11-20 15:05:53.862164] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:15:23.324 0 Kelvin (-273 Celsius) 00:15:23.324 Available Spare: 0% 00:15:23.324 Available Spare Threshold: 0% 00:15:23.324 Life Percentage Used: 0% 00:15:23.324 Data Units Read: 0 00:15:23.324 Data Units Written: 0 00:15:23.324 Host Read Commands: 0 00:15:23.324 Host Write Commands: 0 00:15:23.324 Controller Busy Time: 0 minutes 00:15:23.324 Power Cycles: 0 00:15:23.324 Power On Hours: 0 hours 00:15:23.324 Unsafe Shutdowns: 0 00:15:23.324 Unrecoverable Media Errors: 0 00:15:23.324 Lifetime Error Log Entries: 0 00:15:23.324 Warning Temperature Time: 0 minutes 00:15:23.324 Critical Temperature Time: 0 minutes 00:15:23.324 00:15:23.324 Number of Queues 00:15:23.324 ================ 00:15:23.324 Number of I/O Submission Queues: 127 00:15:23.324 Number of I/O Completion Queues: 127 00:15:23.324 00:15:23.324 Active Namespaces 00:15:23.324 ================= 00:15:23.324 Namespace ID:1 00:15:23.324 Error Recovery Timeout: Unlimited 00:15:23.324 Command Set Identifier: NVM (00h) 00:15:23.324 Deallocate: Supported 00:15:23.324 Deallocated/Unwritten Error: Not Supported 00:15:23.324 Deallocated Read Value: Unknown 00:15:23.324 Deallocate in Write Zeroes: Not Supported 00:15:23.324 Deallocated Guard Field: 0xFFFF 00:15:23.324 Flush: Supported 00:15:23.324 Reservation: Supported 00:15:23.324 Namespace Sharing Capabilities: Multiple Controllers 00:15:23.324 Size (in LBAs): 131072 (0GiB) 00:15:23.324 Capacity (in LBAs): 131072 (0GiB) 00:15:23.324 Utilization (in LBAs): 131072 (0GiB) 00:15:23.324 NGUID: ABCDEF0123456789ABCDEF0123456789 00:15:23.324 EUI64: ABCDEF0123456789 00:15:23.324 UUID: c7d83339-05d9-4db8-9a23-928d138faa78 00:15:23.324 Thin Provisioning: Not Supported 00:15:23.324 Per-NS Atomic Units: Yes 00:15:23.324 Atomic Boundary Size (Normal): 0 00:15:23.324 Atomic Boundary Size (PFail): 0 00:15:23.324 Atomic Boundary Offset: 0 00:15:23.324 Maximum Single Source Range Length: 65535 00:15:23.324 Maximum Copy Length: 65535 00:15:23.324 Maximum Source Range Count: 1 00:15:23.324 NGUID/EUI64 Never Reused: No 00:15:23.324 Namespace Write Protected: No 00:15:23.324 Number of LBA Formats: 1 00:15:23.324 Current LBA Format: LBA Format #00 00:15:23.324 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:23.324 00:15:23.324 15:05:53 -- host/identify.sh@51 -- # sync 00:15:23.324 15:05:53 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.324 15:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.324 15:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:23.324 15:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.324 15:05:53 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:15:23.324 15:05:53 -- 
host/identify.sh@56 -- # nvmftestfini 00:15:23.324 15:05:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:23.324 15:05:53 -- nvmf/common.sh@116 -- # sync 00:15:23.324 15:05:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:23.324 15:05:53 -- nvmf/common.sh@119 -- # set +e 00:15:23.324 15:05:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.324 15:05:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:23.324 rmmod nvme_tcp 00:15:23.324 rmmod nvme_fabrics 00:15:23.324 rmmod nvme_keyring 00:15:23.324 15:05:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.324 15:05:54 -- nvmf/common.sh@123 -- # set -e 00:15:23.324 15:05:54 -- nvmf/common.sh@124 -- # return 0 00:15:23.324 15:05:54 -- nvmf/common.sh@477 -- # '[' -n 80288 ']' 00:15:23.324 15:05:54 -- nvmf/common.sh@478 -- # killprocess 80288 00:15:23.324 15:05:54 -- common/autotest_common.sh@936 -- # '[' -z 80288 ']' 00:15:23.324 15:05:54 -- common/autotest_common.sh@940 -- # kill -0 80288 00:15:23.324 15:05:54 -- common/autotest_common.sh@941 -- # uname 00:15:23.324 15:05:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:23.324 15:05:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80288 00:15:23.324 15:05:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:23.324 15:05:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:23.324 killing process with pid 80288 00:15:23.324 15:05:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80288' 00:15:23.324 15:05:54 -- common/autotest_common.sh@955 -- # kill 80288 00:15:23.324 [2024-11-20 15:05:54.046737] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:23.324 15:05:54 -- common/autotest_common.sh@960 -- # wait 80288 00:15:23.582 15:05:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:23.582 15:05:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:23.582 15:05:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:23.582 15:05:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.582 15:05:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:23.582 15:05:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.582 15:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.582 15:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.582 15:05:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:23.582 00:15:23.582 real 0m2.590s 00:15:23.582 user 0m7.373s 00:15:23.582 sys 0m0.574s 00:15:23.583 15:05:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:23.583 15:05:54 -- common/autotest_common.sh@10 -- # set +x 00:15:23.583 ************************************ 00:15:23.583 END TEST nvmf_identify 00:15:23.583 ************************************ 00:15:23.583 15:05:54 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:23.583 15:05:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:23.583 15:05:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:23.583 15:05:54 -- common/autotest_common.sh@10 -- # set +x 00:15:23.583 ************************************ 00:15:23.583 START TEST nvmf_perf 00:15:23.583 ************************************ 00:15:23.583 15:05:54 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:15:23.583 * Looking for test storage... 00:15:23.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:23.841 15:05:54 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:23.841 15:05:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:23.841 15:05:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:23.842 15:05:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:23.842 15:05:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:23.842 15:05:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:23.842 15:05:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:23.842 15:05:54 -- scripts/common.sh@335 -- # IFS=.-: 00:15:23.842 15:05:54 -- scripts/common.sh@335 -- # read -ra ver1 00:15:23.842 15:05:54 -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.842 15:05:54 -- scripts/common.sh@336 -- # read -ra ver2 00:15:23.842 15:05:54 -- scripts/common.sh@337 -- # local 'op=<' 00:15:23.842 15:05:54 -- scripts/common.sh@339 -- # ver1_l=2 00:15:23.842 15:05:54 -- scripts/common.sh@340 -- # ver2_l=1 00:15:23.842 15:05:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:23.842 15:05:54 -- scripts/common.sh@343 -- # case "$op" in 00:15:23.842 15:05:54 -- scripts/common.sh@344 -- # : 1 00:15:23.842 15:05:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:23.842 15:05:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:23.842 15:05:54 -- scripts/common.sh@364 -- # decimal 1 00:15:23.842 15:05:54 -- scripts/common.sh@352 -- # local d=1 00:15:23.842 15:05:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.842 15:05:54 -- scripts/common.sh@354 -- # echo 1 00:15:23.842 15:05:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:23.842 15:05:54 -- scripts/common.sh@365 -- # decimal 2 00:15:23.842 15:05:54 -- scripts/common.sh@352 -- # local d=2 00:15:23.842 15:05:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.842 15:05:54 -- scripts/common.sh@354 -- # echo 2 00:15:23.842 15:05:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:23.842 15:05:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:23.842 15:05:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:23.842 15:05:54 -- scripts/common.sh@367 -- # return 0 00:15:23.842 15:05:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.842 15:05:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:23.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.842 --rc genhtml_branch_coverage=1 00:15:23.842 --rc genhtml_function_coverage=1 00:15:23.842 --rc genhtml_legend=1 00:15:23.842 --rc geninfo_all_blocks=1 00:15:23.842 --rc geninfo_unexecuted_blocks=1 00:15:23.842 00:15:23.842 ' 00:15:23.842 15:05:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:23.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.842 --rc genhtml_branch_coverage=1 00:15:23.842 --rc genhtml_function_coverage=1 00:15:23.842 --rc genhtml_legend=1 00:15:23.842 --rc geninfo_all_blocks=1 00:15:23.842 --rc geninfo_unexecuted_blocks=1 00:15:23.842 00:15:23.842 ' 00:15:23.842 15:05:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:23.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.842 --rc genhtml_branch_coverage=1 00:15:23.842 --rc genhtml_function_coverage=1 00:15:23.842 --rc genhtml_legend=1 00:15:23.842 --rc 
geninfo_all_blocks=1 00:15:23.842 --rc geninfo_unexecuted_blocks=1 00:15:23.842 00:15:23.842 ' 00:15:23.842 15:05:54 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:23.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.842 --rc genhtml_branch_coverage=1 00:15:23.842 --rc genhtml_function_coverage=1 00:15:23.842 --rc genhtml_legend=1 00:15:23.842 --rc geninfo_all_blocks=1 00:15:23.842 --rc geninfo_unexecuted_blocks=1 00:15:23.842 00:15:23.842 ' 00:15:23.842 15:05:54 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:23.842 15:05:54 -- nvmf/common.sh@7 -- # uname -s 00:15:23.842 15:05:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.842 15:05:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.842 15:05:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.842 15:05:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.842 15:05:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.842 15:05:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.842 15:05:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.842 15:05:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.842 15:05:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.842 15:05:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:15:23.842 15:05:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:15:23.842 15:05:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.842 15:05:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.842 15:05:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:23.842 15:05:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:23.842 15:05:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.842 15:05:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.842 15:05:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.842 15:05:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.842 15:05:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.842 15:05:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.842 15:05:54 -- paths/export.sh@5 -- # export PATH 00:15:23.842 15:05:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.842 15:05:54 -- nvmf/common.sh@46 -- # : 0 00:15:23.842 15:05:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:23.842 15:05:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:23.842 15:05:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:23.842 15:05:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.842 15:05:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.842 15:05:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:23.842 15:05:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:23.842 15:05:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:23.842 15:05:54 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:23.842 15:05:54 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:23.842 15:05:54 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:23.842 15:05:54 -- host/perf.sh@17 -- # nvmftestinit 00:15:23.842 15:05:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:23.842 15:05:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.842 15:05:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:23.842 15:05:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:23.842 15:05:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:23.842 15:05:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.842 15:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.842 15:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.842 15:05:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:23.842 15:05:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:23.842 15:05:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.842 15:05:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.842 15:05:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:23.843 15:05:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:23.843 15:05:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:23.843 15:05:54 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:23.843 15:05:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:23.843 15:05:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.843 15:05:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:23.843 15:05:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:23.843 15:05:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:23.843 15:05:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:23.843 15:05:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:23.843 15:05:54 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:23.843 Cannot find device "nvmf_tgt_br" 00:15:23.843 15:05:54 -- nvmf/common.sh@154 -- # true 00:15:23.843 15:05:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:23.843 Cannot find device "nvmf_tgt_br2" 00:15:23.843 15:05:54 -- nvmf/common.sh@155 -- # true 00:15:23.843 15:05:54 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:23.843 15:05:54 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:23.843 Cannot find device "nvmf_tgt_br" 00:15:23.843 15:05:54 -- nvmf/common.sh@157 -- # true 00:15:23.843 15:05:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:23.843 Cannot find device "nvmf_tgt_br2" 00:15:23.843 15:05:54 -- nvmf/common.sh@158 -- # true 00:15:23.843 15:05:54 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:23.843 15:05:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:24.101 15:05:54 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.101 15:05:54 -- nvmf/common.sh@161 -- # true 00:15:24.101 15:05:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.101 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.101 15:05:54 -- nvmf/common.sh@162 -- # true 00:15:24.101 15:05:54 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.101 15:05:54 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.101 15:05:54 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.101 15:05:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.101 15:05:54 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.101 15:05:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.101 15:05:54 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.101 15:05:54 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:24.101 15:05:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:24.101 15:05:54 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:24.101 15:05:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:24.101 15:05:54 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:24.101 15:05:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:24.101 15:05:54 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:24.101 15:05:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
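For reference, the nvmf_veth_init sequence recorded above boils down to the following by-hand equivalent (a condensed sketch only: the namespace, interface names and 10.0.0.x addresses are the ones this test uses, and the bridge, iptables and ping steps that the log records next complete the setup):

  # one network namespace for the SPDK target, three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-side interfaces move into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator stays in the root namespace on 10.0.0.1; the target will listen on 10.0.0.2/10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up (the nvmf_br bridge and the iptables ACCEPT rule follow below)
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up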
00:15:24.101 15:05:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:24.101 15:05:54 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:24.101 15:05:54 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:24.101 15:05:54 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:24.101 15:05:54 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:24.101 15:05:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:24.101 15:05:54 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:24.101 15:05:54 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.101 15:05:54 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:24.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:15:24.101 00:15:24.101 --- 10.0.0.2 ping statistics --- 00:15:24.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.101 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:24.101 15:05:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:24.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:24.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:24.102 00:15:24.102 --- 10.0.0.3 ping statistics --- 00:15:24.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.102 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:24.102 15:05:54 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:24.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:24.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:24.102 00:15:24.102 --- 10.0.0.1 ping statistics --- 00:15:24.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.102 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:24.102 15:05:54 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.102 15:05:54 -- nvmf/common.sh@421 -- # return 0 00:15:24.102 15:05:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:24.102 15:05:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.102 15:05:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:24.102 15:05:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:24.102 15:05:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.102 15:05:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:24.102 15:05:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:24.102 15:05:54 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:15:24.102 15:05:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:24.102 15:05:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:24.102 15:05:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.102 15:05:54 -- nvmf/common.sh@469 -- # nvmfpid=80504 00:15:24.102 15:05:54 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:24.102 15:05:54 -- nvmf/common.sh@470 -- # waitforlisten 80504 00:15:24.102 15:05:54 -- common/autotest_common.sh@829 -- # '[' -z 80504 ']' 00:15:24.102 15:05:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
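The target that waitforlisten is polling for here is then configured entirely over /var/tmp/spdk.sock. Condensed from the RPC calls the log records next (a sketch using the same values as the test: a 64 MB/512-byte-block malloc bdev plus the local Nvme0n1 attached earlier via gen_nvme.sh and load_subsystem_config; paths are shortened relative to the spdk repo):

  # launch the target inside the namespace, then wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the TCP transport, one subsystem, two namespaces and the listeners
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_malloc_create 64 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then target this listener with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.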
00:15:24.102 15:05:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.102 15:05:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.102 15:05:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.102 15:05:54 -- common/autotest_common.sh@10 -- # set +x 00:15:24.360 [2024-11-20 15:05:54.911121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:24.360 [2024-11-20 15:05:54.911313] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.360 [2024-11-20 15:05:55.049860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.360 [2024-11-20 15:05:55.085984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:24.360 [2024-11-20 15:05:55.086343] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.360 [2024-11-20 15:05:55.086465] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:24.360 [2024-11-20 15:05:55.086586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.360 [2024-11-20 15:05:55.086778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.360 [2024-11-20 15:05:55.086896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.360 [2024-11-20 15:05:55.086949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.360 [2024-11-20 15:05:55.086949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.617 15:05:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.617 15:05:55 -- common/autotest_common.sh@862 -- # return 0 00:15:24.618 15:05:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:24.618 15:05:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:24.618 15:05:55 -- common/autotest_common.sh@10 -- # set +x 00:15:24.618 15:05:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.618 15:05:55 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:24.618 15:05:55 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:15:25.183 15:05:55 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:15:25.183 15:05:55 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:15:25.183 15:05:55 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:15:25.183 15:05:55 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:25.749 15:05:56 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:15:25.749 15:05:56 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:15:25.749 15:05:56 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:15:25.749 15:05:56 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:15:25.749 15:05:56 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:25.749 [2024-11-20 15:05:56.522143] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.749 15:05:56 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.007 15:05:56 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:26.007 15:05:56 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.265 15:05:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:15:26.265 15:05:57 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:15:26.523 15:05:57 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.781 [2024-11-20 15:05:57.523340] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.781 15:05:57 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.346 15:05:57 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:15:27.346 15:05:57 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:27.346 15:05:57 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:15:27.346 15:05:57 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:15:28.281 Initializing NVMe Controllers 00:15:28.281 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:15:28.281 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:15:28.281 Initialization complete. Launching workers. 00:15:28.281 ======================================================== 00:15:28.281 Latency(us) 00:15:28.281 Device Information : IOPS MiB/s Average min max 00:15:28.281 PCIE (0000:00:06.0) NSID 1 from core 0: 26011.22 101.61 1229.82 291.64 10593.28 00:15:28.281 ======================================================== 00:15:28.281 Total : 26011.22 101.61 1229.82 291.64 10593.28 00:15:28.281 00:15:28.281 15:05:59 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:29.686 Initializing NVMe Controllers 00:15:29.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:29.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:29.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:29.686 Initialization complete. Launching workers. 
00:15:29.686 ======================================================== 00:15:29.686 Latency(us) 00:15:29.686 Device Information : IOPS MiB/s Average min max 00:15:29.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3107.20 12.14 320.19 117.67 6210.07 00:15:29.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.49 0.48 8160.76 5981.05 12022.33 00:15:29.686 ======================================================== 00:15:29.686 Total : 3230.69 12.62 619.89 117.67 12022.33 00:15:29.686 00:15:29.686 15:06:00 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:31.061 Initializing NVMe Controllers 00:15:31.061 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:31.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:31.061 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:31.061 Initialization complete. Launching workers. 00:15:31.061 ======================================================== 00:15:31.061 Latency(us) 00:15:31.061 Device Information : IOPS MiB/s Average min max 00:15:31.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8402.91 32.82 3807.49 514.23 9792.87 00:15:31.061 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3883.40 15.17 8268.05 6790.30 16597.04 00:15:31.061 ======================================================== 00:15:31.061 Total : 12286.31 47.99 5217.36 514.23 16597.04 00:15:31.061 00:15:31.061 15:06:01 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:15:31.061 15:06:01 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:33.591 Initializing NVMe Controllers 00:15:33.591 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.591 Controller IO queue size 128, less than required. 00:15:33.591 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.591 Controller IO queue size 128, less than required. 00:15:33.591 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:33.591 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:33.591 Initialization complete. Launching workers. 
00:15:33.591 ======================================================== 00:15:33.591 Latency(us) 00:15:33.591 Device Information : IOPS MiB/s Average min max 00:15:33.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1288.31 322.08 100629.68 48389.12 247509.79 00:15:33.591 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 548.92 137.23 239125.12 103850.85 525074.63 00:15:33.591 ======================================================== 00:15:33.591 Total : 1837.23 459.31 142008.73 48389.12 525074.63 00:15:33.591 00:15:33.591 15:06:04 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:15:33.849 No valid NVMe controllers or AIO or URING devices found 00:15:33.849 Initializing NVMe Controllers 00:15:33.849 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.849 Controller IO queue size 128, less than required. 00:15:33.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.849 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:15:33.849 Controller IO queue size 128, less than required. 00:15:33.849 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.849 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:15:33.849 WARNING: Some requested NVMe devices were skipped 00:15:33.849 15:06:04 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:15:36.420 Initializing NVMe Controllers 00:15:36.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.420 Controller IO queue size 128, less than required. 00:15:36.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:36.420 Controller IO queue size 128, less than required. 00:15:36.420 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:36.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:36.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:36.420 Initialization complete. Launching workers. 
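The empty run with -o 36964 evidently exercises spdk_nvme_perf's size validation rather than measuring anything: the IO size must be a multiple of each namespace's sector size, and 36964 divides by neither, so both namespaces are dropped and the tool reports that no valid devices remain. Checking the two warnings:

  36964 / 512  = 72.195...   -> nsid 1 (512-byte sectors) removed from the test
  36964 / 4096 =  9.024...   -> nsid 2 (4096-byte sectors) removed from the test

The next run adds --transport-stat, which makes the tool print per-lcore transport counters (polls, socket completions, NVMe completions, queued requests) alongside the usual latency table.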
00:15:36.420 00:15:36.420 ==================== 00:15:36.420 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:15:36.420 TCP transport: 00:15:36.420 polls: 8122 00:15:36.420 idle_polls: 0 00:15:36.420 sock_completions: 8122 00:15:36.420 nvme_completions: 6329 00:15:36.420 submitted_requests: 9714 00:15:36.420 queued_requests: 1 00:15:36.420 00:15:36.420 ==================== 00:15:36.420 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:15:36.420 TCP transport: 00:15:36.420 polls: 8813 00:15:36.420 idle_polls: 0 00:15:36.420 sock_completions: 8813 00:15:36.420 nvme_completions: 5425 00:15:36.420 submitted_requests: 8289 00:15:36.420 queued_requests: 1 00:15:36.420 ======================================================== 00:15:36.420 Latency(us) 00:15:36.420 Device Information : IOPS MiB/s Average min max 00:15:36.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1644.04 411.01 79032.43 43980.59 146201.65 00:15:36.420 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1417.81 354.45 92235.64 41876.98 180422.55 00:15:36.420 ======================================================== 00:15:36.420 Total : 3061.85 765.46 85146.27 41876.98 180422.55 00:15:36.420 00:15:36.420 15:06:06 -- host/perf.sh@66 -- # sync 00:15:36.420 15:06:07 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.678 15:06:07 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:15:36.678 15:06:07 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:15:36.678 15:06:07 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:15:37.244 15:06:07 -- host/perf.sh@72 -- # ls_guid=3bd51387-86a5-4c74-9be7-612907f7d202 00:15:37.244 15:06:07 -- host/perf.sh@73 -- # get_lvs_free_mb 3bd51387-86a5-4c74-9be7-612907f7d202 00:15:37.244 15:06:07 -- common/autotest_common.sh@1353 -- # local lvs_uuid=3bd51387-86a5-4c74-9be7-612907f7d202 00:15:37.244 15:06:07 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:37.244 15:06:07 -- common/autotest_common.sh@1355 -- # local fc 00:15:37.244 15:06:07 -- common/autotest_common.sh@1356 -- # local cs 00:15:37.244 15:06:07 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:37.502 15:06:08 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:37.502 { 00:15:37.502 "uuid": "3bd51387-86a5-4c74-9be7-612907f7d202", 00:15:37.502 "name": "lvs_0", 00:15:37.502 "base_bdev": "Nvme0n1", 00:15:37.502 "total_data_clusters": 1278, 00:15:37.502 "free_clusters": 1278, 00:15:37.502 "block_size": 4096, 00:15:37.502 "cluster_size": 4194304 00:15:37.502 } 00:15:37.502 ]' 00:15:37.502 15:06:08 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="3bd51387-86a5-4c74-9be7-612907f7d202") .free_clusters' 00:15:37.502 15:06:08 -- common/autotest_common.sh@1358 -- # fc=1278 00:15:37.502 15:06:08 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="3bd51387-86a5-4c74-9be7-612907f7d202") .cluster_size' 00:15:37.502 5112 00:15:37.502 15:06:08 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:37.502 15:06:08 -- common/autotest_common.sh@1362 -- # free_mb=5112 00:15:37.502 15:06:08 -- common/autotest_common.sh@1363 -- # echo 5112 00:15:37.502 15:06:08 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:15:37.502 15:06:08 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
3bd51387-86a5-4c74-9be7-612907f7d202 lbd_0 5112 00:15:38.068 15:06:08 -- host/perf.sh@80 -- # lb_guid=7d13da14-6053-4619-97cb-106026bb701e 00:15:38.068 15:06:08 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 7d13da14-6053-4619-97cb-106026bb701e lvs_n_0 00:15:38.326 15:06:09 -- host/perf.sh@83 -- # ls_nested_guid=70ad4cba-72be-4638-9385-f8907f7a67ca 00:15:38.326 15:06:09 -- host/perf.sh@84 -- # get_lvs_free_mb 70ad4cba-72be-4638-9385-f8907f7a67ca 00:15:38.326 15:06:09 -- common/autotest_common.sh@1353 -- # local lvs_uuid=70ad4cba-72be-4638-9385-f8907f7a67ca 00:15:38.326 15:06:09 -- common/autotest_common.sh@1354 -- # local lvs_info 00:15:38.326 15:06:09 -- common/autotest_common.sh@1355 -- # local fc 00:15:38.326 15:06:09 -- common/autotest_common.sh@1356 -- # local cs 00:15:38.326 15:06:09 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:38.585 15:06:09 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:15:38.585 { 00:15:38.585 "uuid": "3bd51387-86a5-4c74-9be7-612907f7d202", 00:15:38.585 "name": "lvs_0", 00:15:38.585 "base_bdev": "Nvme0n1", 00:15:38.585 "total_data_clusters": 1278, 00:15:38.585 "free_clusters": 0, 00:15:38.585 "block_size": 4096, 00:15:38.585 "cluster_size": 4194304 00:15:38.585 }, 00:15:38.585 { 00:15:38.585 "uuid": "70ad4cba-72be-4638-9385-f8907f7a67ca", 00:15:38.585 "name": "lvs_n_0", 00:15:38.585 "base_bdev": "7d13da14-6053-4619-97cb-106026bb701e", 00:15:38.585 "total_data_clusters": 1276, 00:15:38.585 "free_clusters": 1276, 00:15:38.585 "block_size": 4096, 00:15:38.585 "cluster_size": 4194304 00:15:38.585 } 00:15:38.585 ]' 00:15:38.585 15:06:09 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="70ad4cba-72be-4638-9385-f8907f7a67ca") .free_clusters' 00:15:38.585 15:06:09 -- common/autotest_common.sh@1358 -- # fc=1276 00:15:38.585 15:06:09 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="70ad4cba-72be-4638-9385-f8907f7a67ca") .cluster_size' 00:15:38.843 15:06:09 -- common/autotest_common.sh@1359 -- # cs=4194304 00:15:38.843 15:06:09 -- common/autotest_common.sh@1362 -- # free_mb=5104 00:15:38.843 15:06:09 -- common/autotest_common.sh@1363 -- # echo 5104 00:15:38.843 5104 00:15:38.843 15:06:09 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:38.843 15:06:09 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 70ad4cba-72be-4638-9385-f8907f7a67ca lbd_nest_0 5104 00:15:39.101 15:06:09 -- host/perf.sh@88 -- # lb_nested_guid=90f78fd1-9efd-48ab-8d09-b597d9d3504d 00:15:39.101 15:06:09 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:39.360 15:06:09 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:39.360 15:06:09 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 90f78fd1-9efd-48ab-8d09-b597d9d3504d 00:15:39.618 15:06:10 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.876 15:06:10 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:39.876 15:06:10 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:39.876 15:06:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:39.876 15:06:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:39.876 15:06:10 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:40.528 No valid NVMe controllers or AIO or URING devices found 00:15:40.528 Initializing NVMe Controllers 00:15:40.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:40.528 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:40.528 WARNING: Some requested NVMe devices were skipped 00:15:40.528 15:06:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:40.528 15:06:10 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:50.493 Initializing NVMe Controllers 00:15:50.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:50.493 Initialization complete. Launching workers. 00:15:50.493 ======================================================== 00:15:50.493 Latency(us) 00:15:50.494 Device Information : IOPS MiB/s Average min max 00:15:50.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1082.80 135.35 923.14 316.44 7264.00 00:15:50.494 ======================================================== 00:15:50.494 Total : 1082.80 135.35 923.14 316.44 7264.00 00:15:50.494 00:15:50.494 15:06:21 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:50.494 15:06:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:50.494 15:06:21 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:50.752 No valid NVMe controllers or AIO or URING devices found 00:15:50.752 Initializing NVMe Controllers 00:15:50.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:50.752 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:50.752 WARNING: Some requested NVMe devices were skipped 00:15:50.752 15:06:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:50.752 15:06:21 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:02.985 Initializing NVMe Controllers 00:16:02.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.985 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:02.985 Initialization complete. Launching workers. 
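The skipped 512-byte runs in this loop are a consequence of the namespace geometry rather than a failure: lbd_nest_0 is the 5104 MiB logical volume created above, exposed with a 4096-byte block size, so a 512-byte IO cannot be issued against it and spdk_nvme_perf removes the namespace from the run. The numbers line up with the warning text:

  5104 MiB * 1048576 = 5351931904 bytes   (the 'invalid ns size' printed in the warning)
  -o 512    -> smaller than the 4096-byte block size, namespace skipped
  -o 131072 -> 32 * 4096, runs normally (results follow)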
00:16:02.985 ======================================================== 00:16:02.985 Latency(us) 00:16:02.985 Device Information : IOPS MiB/s Average min max 00:16:02.985 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1314.89 164.36 24359.49 6326.77 67386.85 00:16:02.985 ======================================================== 00:16:02.985 Total : 1314.89 164.36 24359.49 6326.77 67386.85 00:16:02.985 00:16:02.985 15:06:31 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:16:02.985 15:06:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:02.985 15:06:31 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:02.985 No valid NVMe controllers or AIO or URING devices found 00:16:02.985 Initializing NVMe Controllers 00:16:02.985 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.985 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:16:02.985 WARNING: Some requested NVMe devices were skipped 00:16:02.985 15:06:31 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:16:02.985 15:06:31 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:16:13.011 Initializing NVMe Controllers 00:16:13.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:13.011 Controller IO queue size 128, less than required. 00:16:13.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:13.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:13.011 Initialization complete. Launching workers. 
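The "Controller IO queue size 128, less than required" notice printed before the q=128 runs is informational: the tool wants a little more headroom than the 128-entry IO queue pair provides, so some requests may wait inside the host NVMe driver rather than on the wire, and the reported averages then include that software queueing time. The averages are consistent with Little's law; for the q=32, 128 KiB run above:

  32 outstanding IOs / 1314.89 IOPS ≈ 0.0243 s ≈ 24.3 ms   (table: 24359.49 us average)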
00:16:13.011 ======================================================== 00:16:13.011 Latency(us) 00:16:13.011 Device Information : IOPS MiB/s Average min max 00:16:13.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3568.20 446.02 35956.35 12113.58 115536.44 00:16:13.011 ======================================================== 00:16:13.011 Total : 3568.20 446.02 35956.35 12113.58 115536.44 00:16:13.011 00:16:13.011 15:06:42 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.011 15:06:42 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 90f78fd1-9efd-48ab-8d09-b597d9d3504d 00:16:13.011 15:06:43 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:13.011 15:06:43 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7d13da14-6053-4619-97cb-106026bb701e 00:16:13.011 15:06:43 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:13.292 15:06:43 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:13.292 15:06:43 -- host/perf.sh@114 -- # nvmftestfini 00:16:13.292 15:06:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:13.292 15:06:43 -- nvmf/common.sh@116 -- # sync 00:16:13.292 15:06:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:13.292 15:06:43 -- nvmf/common.sh@119 -- # set +e 00:16:13.292 15:06:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:13.292 15:06:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:13.292 rmmod nvme_tcp 00:16:13.292 rmmod nvme_fabrics 00:16:13.292 rmmod nvme_keyring 00:16:13.292 15:06:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:13.292 15:06:44 -- nvmf/common.sh@123 -- # set -e 00:16:13.292 15:06:44 -- nvmf/common.sh@124 -- # return 0 00:16:13.292 15:06:44 -- nvmf/common.sh@477 -- # '[' -n 80504 ']' 00:16:13.292 15:06:44 -- nvmf/common.sh@478 -- # killprocess 80504 00:16:13.292 15:06:44 -- common/autotest_common.sh@936 -- # '[' -z 80504 ']' 00:16:13.292 15:06:44 -- common/autotest_common.sh@940 -- # kill -0 80504 00:16:13.292 15:06:44 -- common/autotest_common.sh@941 -- # uname 00:16:13.292 15:06:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:13.292 15:06:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80504 00:16:13.292 killing process with pid 80504 00:16:13.292 15:06:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:13.292 15:06:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:13.292 15:06:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80504' 00:16:13.292 15:06:44 -- common/autotest_common.sh@955 -- # kill 80504 00:16:13.292 15:06:44 -- common/autotest_common.sh@960 -- # wait 80504 00:16:14.667 15:06:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:14.667 15:06:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:14.667 15:06:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:14.667 15:06:45 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.667 15:06:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:14.667 15:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.667 15:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.667 15:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.667 15:06:45 -- nvmf/common.sh@278 -- # 
ip -4 addr flush nvmf_init_if 00:16:14.667 ************************************ 00:16:14.667 END TEST nvmf_perf 00:16:14.667 ************************************ 00:16:14.667 00:16:14.667 real 0m50.861s 00:16:14.667 user 3m11.592s 00:16:14.667 sys 0m13.395s 00:16:14.667 15:06:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:14.667 15:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:14.667 15:06:45 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:14.667 15:06:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:14.667 15:06:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:14.667 15:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:14.667 ************************************ 00:16:14.667 START TEST nvmf_fio_host 00:16:14.667 ************************************ 00:16:14.667 15:06:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:16:14.667 * Looking for test storage... 00:16:14.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:14.667 15:06:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:14.667 15:06:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:14.667 15:06:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:14.667 15:06:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:14.667 15:06:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:14.667 15:06:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:14.667 15:06:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:14.667 15:06:45 -- scripts/common.sh@335 -- # IFS=.-: 00:16:14.667 15:06:45 -- scripts/common.sh@335 -- # read -ra ver1 00:16:14.667 15:06:45 -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.667 15:06:45 -- scripts/common.sh@336 -- # read -ra ver2 00:16:14.667 15:06:45 -- scripts/common.sh@337 -- # local 'op=<' 00:16:14.667 15:06:45 -- scripts/common.sh@339 -- # ver1_l=2 00:16:14.667 15:06:45 -- scripts/common.sh@340 -- # ver2_l=1 00:16:14.667 15:06:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:14.667 15:06:45 -- scripts/common.sh@343 -- # case "$op" in 00:16:14.667 15:06:45 -- scripts/common.sh@344 -- # : 1 00:16:14.667 15:06:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:14.667 15:06:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.667 15:06:45 -- scripts/common.sh@364 -- # decimal 1 00:16:14.667 15:06:45 -- scripts/common.sh@352 -- # local d=1 00:16:14.667 15:06:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.667 15:06:45 -- scripts/common.sh@354 -- # echo 1 00:16:14.667 15:06:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:14.667 15:06:45 -- scripts/common.sh@365 -- # decimal 2 00:16:14.667 15:06:45 -- scripts/common.sh@352 -- # local d=2 00:16:14.667 15:06:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.667 15:06:45 -- scripts/common.sh@354 -- # echo 2 00:16:14.667 15:06:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:14.667 15:06:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:14.667 15:06:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:14.667 15:06:45 -- scripts/common.sh@367 -- # return 0 00:16:14.667 15:06:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.667 15:06:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:14.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.667 --rc genhtml_branch_coverage=1 00:16:14.667 --rc genhtml_function_coverage=1 00:16:14.667 --rc genhtml_legend=1 00:16:14.667 --rc geninfo_all_blocks=1 00:16:14.667 --rc geninfo_unexecuted_blocks=1 00:16:14.667 00:16:14.667 ' 00:16:14.667 15:06:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:14.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.667 --rc genhtml_branch_coverage=1 00:16:14.667 --rc genhtml_function_coverage=1 00:16:14.667 --rc genhtml_legend=1 00:16:14.668 --rc geninfo_all_blocks=1 00:16:14.668 --rc geninfo_unexecuted_blocks=1 00:16:14.668 00:16:14.668 ' 00:16:14.668 15:06:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:14.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.668 --rc genhtml_branch_coverage=1 00:16:14.668 --rc genhtml_function_coverage=1 00:16:14.668 --rc genhtml_legend=1 00:16:14.668 --rc geninfo_all_blocks=1 00:16:14.668 --rc geninfo_unexecuted_blocks=1 00:16:14.668 00:16:14.668 ' 00:16:14.668 15:06:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:14.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.668 --rc genhtml_branch_coverage=1 00:16:14.668 --rc genhtml_function_coverage=1 00:16:14.668 --rc genhtml_legend=1 00:16:14.668 --rc geninfo_all_blocks=1 00:16:14.668 --rc geninfo_unexecuted_blocks=1 00:16:14.668 00:16:14.668 ' 00:16:14.668 15:06:45 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.668 15:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.668 15:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.668 15:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.668 15:06:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@5 -- # export PATH 00:16:14.668 15:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:14.668 15:06:45 -- nvmf/common.sh@7 -- # uname -s 00:16:14.668 15:06:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.668 15:06:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.668 15:06:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.668 15:06:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.668 15:06:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.668 15:06:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.668 15:06:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.668 15:06:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.668 15:06:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.668 15:06:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:16:14.668 15:06:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:16:14.668 15:06:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.668 15:06:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.668 15:06:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:14.668 15:06:45 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:14.668 15:06:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.668 15:06:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.668 15:06:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.668 15:06:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- paths/export.sh@5 -- # export PATH 00:16:14.668 15:06:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.668 15:06:45 -- nvmf/common.sh@46 -- # : 0 00:16:14.668 15:06:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.668 15:06:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.668 15:06:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.668 15:06:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.668 15:06:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.668 15:06:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.668 15:06:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.668 15:06:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.668 15:06:45 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:14.668 15:06:45 -- host/fio.sh@14 -- # nvmftestinit 00:16:14.668 15:06:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.668 15:06:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.668 15:06:45 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:16:14.668 15:06:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.668 15:06:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.668 15:06:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.668 15:06:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.668 15:06:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.668 15:06:45 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:14.668 15:06:45 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:14.668 15:06:45 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:14.668 15:06:45 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:14.668 15:06:45 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:14.668 15:06:45 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:14.668 15:06:45 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:14.668 15:06:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:14.668 15:06:45 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:14.668 15:06:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:14.668 15:06:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:14.668 15:06:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:14.668 15:06:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:14.669 15:06:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:14.669 15:06:45 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:14.669 15:06:45 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:14.669 Cannot find device "nvmf_tgt_br" 00:16:14.669 15:06:45 -- nvmf/common.sh@154 -- # true 00:16:14.669 15:06:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:14.927 Cannot find device "nvmf_tgt_br2" 00:16:14.927 15:06:45 -- nvmf/common.sh@155 -- # true 00:16:14.927 15:06:45 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:14.927 15:06:45 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:14.927 Cannot find device "nvmf_tgt_br" 00:16:14.927 15:06:45 -- nvmf/common.sh@157 -- # true 00:16:14.927 15:06:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:14.927 Cannot find device "nvmf_tgt_br2" 00:16:14.927 15:06:45 -- nvmf/common.sh@158 -- # true 00:16:14.927 15:06:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:14.927 15:06:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:14.927 15:06:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:14.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.927 15:06:45 -- nvmf/common.sh@161 -- # true 00:16:14.927 15:06:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:14.927 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:14.927 15:06:45 -- nvmf/common.sh@162 -- # true 00:16:14.927 15:06:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:14.927 15:06:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:14.927 15:06:45 
-- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:14.927 15:06:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:14.927 15:06:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:14.927 15:06:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:14.927 15:06:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:14.927 15:06:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:14.927 15:06:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:14.927 15:06:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:14.927 15:06:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:14.927 15:06:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:14.927 15:06:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:14.927 15:06:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:14.927 15:06:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:14.927 15:06:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:14.928 15:06:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:14.928 15:06:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:14.928 15:06:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:14.928 15:06:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:14.928 15:06:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:15.186 15:06:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:15.186 15:06:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:15.186 15:06:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:15.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:16:15.186 00:16:15.186 --- 10.0.0.2 ping statistics --- 00:16:15.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.186 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:16:15.186 15:06:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:15.186 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:15.186 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:16:15.186 00:16:15.186 --- 10.0.0.3 ping statistics --- 00:16:15.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.186 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:15.186 15:06:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:15.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:15.186 00:16:15.186 --- 10.0.0.1 ping statistics --- 00:16:15.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.186 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:15.186 15:06:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.186 15:06:45 -- nvmf/common.sh@421 -- # return 0 00:16:15.186 15:06:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.186 15:06:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.186 15:06:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.186 15:06:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.186 15:06:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.186 15:06:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.186 15:06:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.186 15:06:45 -- host/fio.sh@16 -- # [[ y != y ]] 00:16:15.186 15:06:45 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:16:15.186 15:06:45 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:15.186 15:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.186 15:06:45 -- host/fio.sh@24 -- # nvmfpid=81331 00:16:15.186 15:06:45 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:15.186 15:06:45 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:15.186 15:06:45 -- host/fio.sh@28 -- # waitforlisten 81331 00:16:15.186 15:06:45 -- common/autotest_common.sh@829 -- # '[' -z 81331 ']' 00:16:15.186 15:06:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.186 15:06:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:15.186 15:06:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.186 15:06:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:15.186 15:06:45 -- common/autotest_common.sh@10 -- # set +x 00:16:15.186 [2024-11-20 15:06:45.835445] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:15.186 [2024-11-20 15:06:45.835538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.186 [2024-11-20 15:06:45.971148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.444 [2024-11-20 15:06:46.008473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.444 [2024-11-20 15:06:46.008853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.444 [2024-11-20 15:06:46.008991] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.444 [2024-11-20 15:06:46.009130] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
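For the fio host tests the target does not sit on a physical NIC at all: nvmftestinit builds a veth-and-bridge sandbox in which the SPDK target runs inside the nvmf_tgt_ns_spdk network namespace (addresses 10.0.0.2 and 10.0.0.3) while the initiator keeps 10.0.0.1 on nvmf_init_if, both sides attached to the nvmf_br bridge, with TCP port 4420 allowed through iptables; the three pings above verify that plumbing before the target starts. Condensed from the commands in the log (the second target interface and the 'ip link set ... up' calls are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT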
00:16:15.444 [2024-11-20 15:06:46.009342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.444 [2024-11-20 15:06:46.009490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.444 [2024-11-20 15:06:46.009554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.444 [2024-11-20 15:06:46.009558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.378 15:06:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:16.378 15:06:46 -- common/autotest_common.sh@862 -- # return 0 00:16:16.378 15:06:46 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:16.378 [2024-11-20 15:06:47.122897] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.378 15:06:47 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:16:16.378 15:06:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:16.378 15:06:47 -- common/autotest_common.sh@10 -- # set +x 00:16:16.637 15:06:47 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:16.895 Malloc1 00:16:16.896 15:06:47 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.154 15:06:47 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.412 15:06:48 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.670 [2024-11-20 15:06:48.360878] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.670 15:06:48 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:17.928 15:06:48 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:16:17.928 15:06:48 -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:17.928 15:06:48 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:17.928 15:06:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:17.928 15:06:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:17.928 15:06:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:17.928 15:06:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.928 15:06:48 -- common/autotest_common.sh@1330 -- # shift 00:16:17.928 15:06:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:17.928 15:06:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:17.928 15:06:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:17.928 15:06:48 -- 
common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:17.928 15:06:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:17.928 15:06:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:17.928 15:06:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:17.928 15:06:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:18.186 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:18.186 fio-3.35 00:16:18.186 Starting 1 thread 00:16:20.788 00:16:20.788 test: (groupid=0, jobs=1): err= 0: pid=81414: Wed Nov 20 15:06:51 2024 00:16:20.788 read: IOPS=9041, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:16:20.788 slat (usec): min=2, max=207, avg= 2.59, stdev= 2.14 00:16:20.788 clat (usec): min=1776, max=13523, avg=7358.44, stdev=575.08 00:16:20.788 lat (usec): min=1806, max=13526, avg=7361.03, stdev=574.91 00:16:20.788 clat percentiles (usec): 00:16:20.788 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6980], 00:16:20.788 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7439], 00:16:20.788 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:16:20.788 | 99.00th=[ 8848], 99.50th=[ 9372], 99.90th=[12125], 99.95th=[12911], 00:16:20.788 | 99.99th=[13435] 00:16:20.788 bw ( KiB/s): min=34872, max=37024, per=99.91%, avg=36132.00, stdev=927.00, samples=4 00:16:20.788 iops : min= 8718, max= 9256, avg=9033.00, stdev=231.75, samples=4 00:16:20.788 write: IOPS=9057, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:16:20.788 slat (usec): min=2, max=139, avg= 2.68, stdev= 1.38 00:16:20.788 clat (usec): min=1520, max=12895, avg=6721.34, stdev=531.41 00:16:20.788 lat (usec): min=1529, max=12898, avg=6724.02, stdev=531.31 00:16:20.788 clat percentiles (usec): 00:16:20.788 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:16:20.788 | 30.00th=[ 6521], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:16:20.788 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:16:20.788 | 99.00th=[ 8160], 99.50th=[ 8717], 99.90th=[11207], 99.95th=[11338], 00:16:20.788 | 99.99th=[12387] 00:16:20.788 bw ( KiB/s): min=35712, max=36648, per=99.99%, avg=36226.00, stdev=409.45, samples=4 00:16:20.788 iops : min= 8928, max= 9162, avg=9056.50, stdev=102.36, samples=4 00:16:20.788 lat (msec) : 2=0.03%, 4=0.13%, 10=99.55%, 20=0.30% 00:16:20.788 cpu : usr=69.53%, sys=22.64%, ctx=10, majf=0, minf=5 00:16:20.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:20.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:20.788 issued rwts: total=18137,18169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.788 00:16:20.789 Run status group 0 (all jobs): 00:16:20.789 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 
00:16:20.789 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2006-2006msec 00:16:20.789 15:06:51 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:20.789 15:06:51 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:20.789 15:06:51 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:20.789 15:06:51 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:20.789 15:06:51 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:20.789 15:06:51 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:20.789 15:06:51 -- common/autotest_common.sh@1330 -- # shift 00:16:20.789 15:06:51 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:20.789 15:06:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:20.789 15:06:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:20.789 15:06:51 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:20.789 15:06:51 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:20.789 15:06:51 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:20.789 15:06:51 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:20.789 15:06:51 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:16:20.789 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:16:20.789 fio-3.35 00:16:20.789 Starting 1 thread 00:16:23.318 00:16:23.318 test: (groupid=0, jobs=1): err= 0: pid=81466: Wed Nov 20 15:06:53 2024 00:16:23.318 read: IOPS=7873, BW=123MiB/s (129MB/s)(246MiB/2003msec) 00:16:23.318 slat (usec): min=3, max=130, avg= 4.25, stdev= 2.08 00:16:23.318 clat (usec): min=1860, max=18322, avg=8898.76, stdev=2935.86 00:16:23.318 lat (usec): min=1863, max=18327, avg=8903.01, stdev=2936.13 00:16:23.318 clat percentiles (usec): 00:16:23.318 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 6194], 00:16:23.318 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8455], 60.00th=[ 9241], 00:16:23.318 | 70.00th=[10290], 80.00th=[11469], 90.00th=[12911], 95.00th=[14746], 00:16:23.318 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:16:23.318 | 99.99th=[18220] 00:16:23.318 bw ( KiB/s): min=57664, max=66016, per=50.62%, avg=63768.00, stdev=4078.91, samples=4 00:16:23.318 iops : min= 3604, max= 
4126, avg=3985.50, stdev=254.93, samples=4 00:16:23.318 write: IOPS=4587, BW=71.7MiB/s (75.2MB/s)(131MiB/1826msec); 0 zone resets 00:16:23.318 slat (usec): min=37, max=264, avg=41.24, stdev= 6.80 00:16:23.318 clat (usec): min=1663, max=23232, avg=12959.92, stdev=2688.04 00:16:23.318 lat (usec): min=1701, max=23286, avg=13001.16, stdev=2690.19 00:16:23.318 clat percentiles (usec): 00:16:23.318 | 1.00th=[ 7898], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:16:23.318 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12518], 60.00th=[13173], 00:16:23.318 | 70.00th=[13960], 80.00th=[14877], 90.00th=[16712], 95.00th=[18220], 00:16:23.318 | 99.00th=[20841], 99.50th=[21627], 99.90th=[22676], 99.95th=[22938], 00:16:23.318 | 99.99th=[23200] 00:16:23.318 bw ( KiB/s): min=60416, max=69632, per=90.82%, avg=66656.00, stdev=4262.45, samples=4 00:16:23.318 iops : min= 3776, max= 4352, avg=4166.00, stdev=266.40, samples=4 00:16:23.318 lat (msec) : 2=0.02%, 4=0.41%, 10=46.87%, 20=52.18%, 50=0.52% 00:16:23.318 cpu : usr=77.73%, sys=15.98%, ctx=23, majf=0, minf=1 00:16:23.318 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:16:23.318 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.318 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:23.318 issued rwts: total=15771,8376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.318 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:23.318 00:16:23.318 Run status group 0 (all jobs): 00:16:23.318 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2003-2003msec 00:16:23.318 WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=131MiB (137MB), run=1826-1826msec 00:16:23.318 15:06:53 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:23.318 15:06:53 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:16:23.318 15:06:53 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:16:23.318 15:06:53 -- host/fio.sh@51 -- # get_nvme_bdfs 00:16:23.318 15:06:53 -- common/autotest_common.sh@1508 -- # bdfs=() 00:16:23.318 15:06:53 -- common/autotest_common.sh@1508 -- # local bdfs 00:16:23.318 15:06:53 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:23.318 15:06:53 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:23.318 15:06:53 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:16:23.318 15:06:54 -- common/autotest_common.sh@1510 -- # (( 2 == 0 )) 00:16:23.318 15:06:54 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:16:23.318 15:06:54 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:16:23.576 Nvme0n1 00:16:23.576 15:06:54 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:16:24.142 15:06:54 -- host/fio.sh@53 -- # ls_guid=61072bd5-4711-48e7-8480-f6a5fd8f871e 00:16:24.143 15:06:54 -- host/fio.sh@54 -- # get_lvs_free_mb 61072bd5-4711-48e7-8480-f6a5fd8f871e 00:16:24.143 15:06:54 -- common/autotest_common.sh@1353 -- # local lvs_uuid=61072bd5-4711-48e7-8480-f6a5fd8f871e 00:16:24.143 15:06:54 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:24.143 15:06:54 -- common/autotest_common.sh@1355 -- # local fc 00:16:24.143 15:06:54 -- 
common/autotest_common.sh@1356 -- # local cs 00:16:24.143 15:06:54 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:24.400 15:06:54 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:24.400 { 00:16:24.400 "uuid": "61072bd5-4711-48e7-8480-f6a5fd8f871e", 00:16:24.400 "name": "lvs_0", 00:16:24.400 "base_bdev": "Nvme0n1", 00:16:24.400 "total_data_clusters": 4, 00:16:24.400 "free_clusters": 4, 00:16:24.400 "block_size": 4096, 00:16:24.400 "cluster_size": 1073741824 00:16:24.400 } 00:16:24.400 ]' 00:16:24.400 15:06:54 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="61072bd5-4711-48e7-8480-f6a5fd8f871e") .free_clusters' 00:16:24.400 15:06:55 -- common/autotest_common.sh@1358 -- # fc=4 00:16:24.401 15:06:55 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="61072bd5-4711-48e7-8480-f6a5fd8f871e") .cluster_size' 00:16:24.401 4096 00:16:24.401 15:06:55 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:16:24.401 15:06:55 -- common/autotest_common.sh@1362 -- # free_mb=4096 00:16:24.401 15:06:55 -- common/autotest_common.sh@1363 -- # echo 4096 00:16:24.401 15:06:55 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:16:24.659 b7671504-7715-4e6b-8063-c855c284a00b 00:16:24.659 15:06:55 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:16:24.917 15:06:55 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:16:25.244 15:06:55 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:25.502 15:06:56 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.502 15:06:56 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.502 15:06:56 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:25.502 15:06:56 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:25.502 15:06:56 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:25.502 15:06:56 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.502 15:06:56 -- common/autotest_common.sh@1330 -- # shift 00:16:25.502 15:06:56 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:25.502 15:06:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:25.502 15:06:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:25.502 15:06:56 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:25.502 15:06:56 -- 
common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:25.502 15:06:56 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:25.502 15:06:56 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:25.502 15:06:56 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:25.502 15:06:56 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:25.502 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:25.502 fio-3.35 00:16:25.502 Starting 1 thread 00:16:28.031 00:16:28.031 test: (groupid=0, jobs=1): err= 0: pid=81576: Wed Nov 20 15:06:58 2024 00:16:28.032 read: IOPS=5832, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2008msec) 00:16:28.032 slat (usec): min=2, max=254, avg= 3.94, stdev= 3.99 00:16:28.032 clat (usec): min=2689, max=22367, avg=11453.58, stdev=1862.31 00:16:28.032 lat (usec): min=2697, max=22369, avg=11457.52, stdev=1862.86 00:16:28.032 clat percentiles (usec): 00:16:28.032 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:28.032 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11207], 60.00th=[11731], 00:16:28.032 | 70.00th=[12387], 80.00th=[12911], 90.00th=[13829], 95.00th=[14615], 00:16:28.032 | 99.00th=[16909], 99.50th=[17695], 99.90th=[20841], 99.95th=[21365], 00:16:28.032 | 99.99th=[22414] 00:16:28.032 bw ( KiB/s): min=21104, max=26008, per=99.87%, avg=23298.00, stdev=2043.49, samples=4 00:16:28.032 iops : min= 5276, max= 6502, avg=5824.50, stdev=510.87, samples=4 00:16:28.032 write: IOPS=5819, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2008msec); 0 zone resets 00:16:28.032 slat (usec): min=2, max=211, avg= 4.08, stdev= 3.43 00:16:28.032 clat (usec): min=1951, max=19775, avg=10430.82, stdev=1803.49 00:16:28.032 lat (usec): min=1962, max=19778, avg=10434.90, stdev=1804.32 00:16:28.032 clat percentiles (usec): 00:16:28.032 | 1.00th=[ 6194], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 8979], 00:16:28.032 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10683], 00:16:28.032 | 70.00th=[11338], 80.00th=[11994], 90.00th=[12780], 95.00th=[13566], 00:16:28.032 | 99.00th=[15533], 99.50th=[16188], 99.90th=[17695], 99.95th=[18220], 00:16:28.032 | 99.99th=[19792] 00:16:28.032 bw ( KiB/s): min=20440, max=25536, per=99.84%, avg=23240.00, stdev=2247.92, samples=4 00:16:28.032 iops : min= 5110, max= 6384, avg=5810.00, stdev=561.98, samples=4 00:16:28.032 lat (msec) : 2=0.01%, 4=0.10%, 10=35.12%, 20=64.72%, 50=0.06% 00:16:28.032 cpu : usr=71.90%, sys=21.33%, ctx=8, majf=0, minf=5 00:16:28.032 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:28.032 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.032 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:28.032 issued rwts: total=11711,11685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.032 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:28.032 00:16:28.032 Run status group 0 (all jobs): 00:16:28.032 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2008-2008msec 00:16:28.032 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.9MB), run=2008-2008msec 00:16:28.032 15:06:58 -- host/fio.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:28.290 15:06:58 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:16:28.547 15:06:59 -- host/fio.sh@64 -- # ls_nested_guid=c3bf4d65-f9c7-4610-b681-b91e55bbf931 00:16:28.547 15:06:59 -- host/fio.sh@65 -- # get_lvs_free_mb c3bf4d65-f9c7-4610-b681-b91e55bbf931 00:16:28.547 15:06:59 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c3bf4d65-f9c7-4610-b681-b91e55bbf931 00:16:28.547 15:06:59 -- common/autotest_common.sh@1354 -- # local lvs_info 00:16:28.547 15:06:59 -- common/autotest_common.sh@1355 -- # local fc 00:16:28.547 15:06:59 -- common/autotest_common.sh@1356 -- # local cs 00:16:28.547 15:06:59 -- common/autotest_common.sh@1357 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:28.805 15:06:59 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:16:28.805 { 00:16:28.805 "uuid": "61072bd5-4711-48e7-8480-f6a5fd8f871e", 00:16:28.805 "name": "lvs_0", 00:16:28.805 "base_bdev": "Nvme0n1", 00:16:28.805 "total_data_clusters": 4, 00:16:28.805 "free_clusters": 0, 00:16:28.805 "block_size": 4096, 00:16:28.805 "cluster_size": 1073741824 00:16:28.805 }, 00:16:28.805 { 00:16:28.805 "uuid": "c3bf4d65-f9c7-4610-b681-b91e55bbf931", 00:16:28.805 "name": "lvs_n_0", 00:16:28.805 "base_bdev": "b7671504-7715-4e6b-8063-c855c284a00b", 00:16:28.805 "total_data_clusters": 1022, 00:16:28.805 "free_clusters": 1022, 00:16:28.805 "block_size": 4096, 00:16:28.805 "cluster_size": 4194304 00:16:28.805 } 00:16:28.805 ]' 00:16:28.805 15:06:59 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c3bf4d65-f9c7-4610-b681-b91e55bbf931") .free_clusters' 00:16:28.805 15:06:59 -- common/autotest_common.sh@1358 -- # fc=1022 00:16:28.805 15:06:59 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c3bf4d65-f9c7-4610-b681-b91e55bbf931") .cluster_size' 00:16:28.805 4088 00:16:28.805 15:06:59 -- common/autotest_common.sh@1359 -- # cs=4194304 00:16:28.805 15:06:59 -- common/autotest_common.sh@1362 -- # free_mb=4088 00:16:28.805 15:06:59 -- common/autotest_common.sh@1363 -- # echo 4088 00:16:28.805 15:06:59 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:16:29.063 ce810cd3-def1-49c2-b0a4-8a5687618ff6 00:16:29.320 15:06:59 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:16:29.578 15:07:00 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:16:29.835 15:07:00 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:16:30.093 15:07:00 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:30.093 15:07:00 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:30.093 15:07:00 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:16:30.093 15:07:00 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:30.093 
15:07:00 -- common/autotest_common.sh@1328 -- # local sanitizers 00:16:30.093 15:07:00 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.093 15:07:00 -- common/autotest_common.sh@1330 -- # shift 00:16:30.093 15:07:00 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:16:30.093 15:07:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # grep libasan 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:30.093 15:07:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:30.093 15:07:00 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:16:30.093 15:07:00 -- common/autotest_common.sh@1334 -- # asan_lib= 00:16:30.093 15:07:00 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:16:30.093 15:07:00 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:16:30.093 15:07:00 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:16:30.351 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:16:30.351 fio-3.35 00:16:30.351 Starting 1 thread 00:16:32.877 00:16:32.877 test: (groupid=0, jobs=1): err= 0: pid=81654: Wed Nov 20 15:07:03 2024 00:16:32.877 read: IOPS=5698, BW=22.3MiB/s (23.3MB/s)(44.7MiB/2009msec) 00:16:32.877 slat (usec): min=2, max=329, avg= 2.94, stdev= 4.15 00:16:32.877 clat (usec): min=3263, max=20287, avg=11741.50, stdev=1136.89 00:16:32.877 lat (usec): min=3279, max=20289, avg=11744.44, stdev=1136.78 00:16:32.877 clat percentiles (usec): 00:16:32.877 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:16:32.877 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:16:32.877 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13042], 95.00th=[13566], 00:16:32.877 | 99.00th=[15139], 99.50th=[16581], 99.90th=[18744], 99.95th=[18744], 00:16:32.877 | 99.99th=[20317] 00:16:32.877 bw ( KiB/s): min=21504, max=23392, per=99.96%, avg=22784.00, stdev=863.80, samples=4 00:16:32.877 iops : min= 5376, max= 5848, avg=5696.00, stdev=215.95, samples=4 00:16:32.877 write: IOPS=5678, BW=22.2MiB/s (23.3MB/s)(44.6MiB/2009msec); 0 zone resets 00:16:32.877 slat (usec): min=2, max=255, avg= 3.10, stdev= 2.95 00:16:32.877 clat (usec): min=2769, max=18951, avg=10657.13, stdev=1105.71 00:16:32.877 lat (usec): min=2789, max=18954, avg=10660.23, stdev=1105.77 00:16:32.877 clat percentiles (usec): 00:16:32.877 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:16:32.877 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:16:32.877 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11863], 95.00th=[12387], 00:16:32.877 | 99.00th=[14484], 99.50th=[15664], 99.90th=[17171], 99.95th=[17695], 00:16:32.877 | 99.99th=[19006] 00:16:32.877 bw ( KiB/s): 
min=22400, max=23104, per=99.82%, avg=22674.00, stdev=317.75, samples=4 00:16:32.877 iops : min= 5600, max= 5776, avg=5668.50, stdev=79.44, samples=4 00:16:32.877 lat (msec) : 4=0.05%, 10=14.65%, 20=85.29%, 50=0.01% 00:16:32.877 cpu : usr=73.06%, sys=21.26%, ctx=5, majf=0, minf=5 00:16:32.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:32.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.877 issued rwts: total=11448,11408,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.877 00:16:32.877 Run status group 0 (all jobs): 00:16:32.878 READ: bw=22.3MiB/s (23.3MB/s), 22.3MiB/s-22.3MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.9MB), run=2009-2009msec 00:16:32.878 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.6MiB (46.7MB), run=2009-2009msec 00:16:32.878 15:07:03 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:32.878 15:07:03 -- host/fio.sh@74 -- # sync 00:16:32.878 15:07:03 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:16:33.136 15:07:03 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:16:33.395 15:07:04 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:16:33.653 15:07:04 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:16:33.911 15:07:04 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:16:34.844 15:07:05 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:16:34.844 15:07:05 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:16:34.844 15:07:05 -- host/fio.sh@86 -- # nvmftestfini 00:16:34.844 15:07:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:34.844 15:07:05 -- nvmf/common.sh@116 -- # sync 00:16:34.844 15:07:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:34.844 15:07:05 -- nvmf/common.sh@119 -- # set +e 00:16:34.844 15:07:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:34.844 15:07:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:34.844 rmmod nvme_tcp 00:16:34.844 rmmod nvme_fabrics 00:16:34.844 rmmod nvme_keyring 00:16:34.844 15:07:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:34.844 15:07:05 -- nvmf/common.sh@123 -- # set -e 00:16:34.844 15:07:05 -- nvmf/common.sh@124 -- # return 0 00:16:34.844 15:07:05 -- nvmf/common.sh@477 -- # '[' -n 81331 ']' 00:16:34.844 15:07:05 -- nvmf/common.sh@478 -- # killprocess 81331 00:16:34.844 15:07:05 -- common/autotest_common.sh@936 -- # '[' -z 81331 ']' 00:16:34.844 15:07:05 -- common/autotest_common.sh@940 -- # kill -0 81331 00:16:34.844 15:07:05 -- common/autotest_common.sh@941 -- # uname 00:16:34.844 15:07:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:34.844 15:07:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81331 00:16:34.844 15:07:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:34.844 15:07:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:34.844 15:07:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81331' 00:16:34.844 killing process with pid 81331 00:16:34.844 15:07:05 -- 
common/autotest_common.sh@955 -- # kill 81331 00:16:34.844 15:07:05 -- common/autotest_common.sh@960 -- # wait 81331 00:16:35.102 15:07:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.102 15:07:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.102 15:07:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.102 15:07:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.102 15:07:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.102 15:07:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.102 15:07:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.102 15:07:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.102 15:07:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:35.102 00:16:35.102 real 0m20.552s 00:16:35.102 user 1m30.895s 00:16:35.102 sys 0m4.476s 00:16:35.102 15:07:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:35.102 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.102 ************************************ 00:16:35.102 END TEST nvmf_fio_host 00:16:35.102 ************************************ 00:16:35.102 15:07:05 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:35.102 15:07:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:35.102 15:07:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.102 15:07:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.102 ************************************ 00:16:35.102 START TEST nvmf_failover 00:16:35.102 ************************************ 00:16:35.102 15:07:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:16:35.102 * Looking for test storage... 00:16:35.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:35.102 15:07:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:35.102 15:07:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:35.102 15:07:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:35.361 15:07:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:35.361 15:07:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:35.361 15:07:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:35.361 15:07:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:35.361 15:07:05 -- scripts/common.sh@335 -- # IFS=.-: 00:16:35.361 15:07:05 -- scripts/common.sh@335 -- # read -ra ver1 00:16:35.361 15:07:05 -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.361 15:07:05 -- scripts/common.sh@336 -- # read -ra ver2 00:16:35.361 15:07:05 -- scripts/common.sh@337 -- # local 'op=<' 00:16:35.361 15:07:05 -- scripts/common.sh@339 -- # ver1_l=2 00:16:35.361 15:07:05 -- scripts/common.sh@340 -- # ver2_l=1 00:16:35.361 15:07:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:35.361 15:07:05 -- scripts/common.sh@343 -- # case "$op" in 00:16:35.361 15:07:05 -- scripts/common.sh@344 -- # : 1 00:16:35.361 15:07:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:35.361 15:07:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:35.361 15:07:05 -- scripts/common.sh@364 -- # decimal 1 00:16:35.361 15:07:05 -- scripts/common.sh@352 -- # local d=1 00:16:35.361 15:07:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.361 15:07:05 -- scripts/common.sh@354 -- # echo 1 00:16:35.361 15:07:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:35.361 15:07:05 -- scripts/common.sh@365 -- # decimal 2 00:16:35.361 15:07:05 -- scripts/common.sh@352 -- # local d=2 00:16:35.361 15:07:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.361 15:07:05 -- scripts/common.sh@354 -- # echo 2 00:16:35.361 15:07:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:35.361 15:07:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:35.361 15:07:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:35.361 15:07:05 -- scripts/common.sh@367 -- # return 0 00:16:35.361 15:07:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.361 15:07:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.361 --rc genhtml_branch_coverage=1 00:16:35.361 --rc genhtml_function_coverage=1 00:16:35.361 --rc genhtml_legend=1 00:16:35.361 --rc geninfo_all_blocks=1 00:16:35.361 --rc geninfo_unexecuted_blocks=1 00:16:35.361 00:16:35.361 ' 00:16:35.361 15:07:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.361 --rc genhtml_branch_coverage=1 00:16:35.361 --rc genhtml_function_coverage=1 00:16:35.361 --rc genhtml_legend=1 00:16:35.361 --rc geninfo_all_blocks=1 00:16:35.361 --rc geninfo_unexecuted_blocks=1 00:16:35.361 00:16:35.361 ' 00:16:35.361 15:07:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.361 --rc genhtml_branch_coverage=1 00:16:35.361 --rc genhtml_function_coverage=1 00:16:35.361 --rc genhtml_legend=1 00:16:35.361 --rc geninfo_all_blocks=1 00:16:35.361 --rc geninfo_unexecuted_blocks=1 00:16:35.361 00:16:35.361 ' 00:16:35.361 15:07:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:35.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.362 --rc genhtml_branch_coverage=1 00:16:35.362 --rc genhtml_function_coverage=1 00:16:35.362 --rc genhtml_legend=1 00:16:35.362 --rc geninfo_all_blocks=1 00:16:35.362 --rc geninfo_unexecuted_blocks=1 00:16:35.362 00:16:35.362 ' 00:16:35.362 15:07:05 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:35.362 15:07:05 -- nvmf/common.sh@7 -- # uname -s 00:16:35.362 15:07:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.362 15:07:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.362 15:07:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.362 15:07:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.362 15:07:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:35.362 15:07:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:35.362 15:07:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.362 15:07:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:35.362 15:07:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.362 15:07:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:35.362 15:07:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:16:35.362 
15:07:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:16:35.362 15:07:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.362 15:07:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:35.362 15:07:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:35.362 15:07:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:35.362 15:07:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.362 15:07:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.362 15:07:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.362 15:07:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.362 15:07:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.362 15:07:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.362 15:07:06 -- paths/export.sh@5 -- # export PATH 00:16:35.362 15:07:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.362 15:07:06 -- nvmf/common.sh@46 -- # : 0 00:16:35.362 15:07:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:35.362 15:07:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:35.362 15:07:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:35.362 15:07:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.362 15:07:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.362 15:07:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
00:16:35.362 15:07:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:35.362 15:07:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:35.362 15:07:06 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:35.362 15:07:06 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:35.362 15:07:06 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.362 15:07:06 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:35.362 15:07:06 -- host/failover.sh@18 -- # nvmftestinit 00:16:35.362 15:07:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:35.362 15:07:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.362 15:07:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:35.362 15:07:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:35.362 15:07:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:35.362 15:07:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.362 15:07:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.362 15:07:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:35.362 15:07:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:35.362 15:07:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:35.362 15:07:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:35.362 15:07:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:35.362 15:07:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:35.362 15:07:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:35.362 15:07:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.362 15:07:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.362 15:07:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:35.362 15:07:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:35.362 15:07:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:35.362 15:07:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:35.362 15:07:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:35.362 15:07:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.362 15:07:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:35.362 15:07:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:35.362 15:07:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:35.362 15:07:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:35.362 15:07:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:35.362 15:07:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:35.362 Cannot find device "nvmf_tgt_br" 00:16:35.362 15:07:06 -- nvmf/common.sh@154 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:35.362 Cannot find device "nvmf_tgt_br2" 00:16:35.362 15:07:06 -- nvmf/common.sh@155 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:35.362 15:07:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:35.362 Cannot find device "nvmf_tgt_br" 00:16:35.362 15:07:06 -- nvmf/common.sh@157 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:35.362 Cannot find device "nvmf_tgt_br2" 00:16:35.362 15:07:06 -- nvmf/common.sh@158 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:35.362 15:07:06 -- nvmf/common.sh@160 
-- # ip link delete nvmf_init_if 00:16:35.362 15:07:06 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:35.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.362 15:07:06 -- nvmf/common.sh@161 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:35.362 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:35.362 15:07:06 -- nvmf/common.sh@162 -- # true 00:16:35.362 15:07:06 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:35.362 15:07:06 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:35.362 15:07:06 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:35.621 15:07:06 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:35.621 15:07:06 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:35.621 15:07:06 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:35.621 15:07:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:35.621 15:07:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:35.621 15:07:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:35.621 15:07:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:35.621 15:07:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:35.621 15:07:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:35.621 15:07:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:35.621 15:07:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:35.621 15:07:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:35.621 15:07:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:35.621 15:07:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:35.621 15:07:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:35.621 15:07:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:35.621 15:07:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:35.621 15:07:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:35.621 15:07:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:35.621 15:07:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:35.621 15:07:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:35.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:35.621 00:16:35.621 --- 10.0.0.2 ping statistics --- 00:16:35.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.621 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:35.621 15:07:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:35.621 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:35.621 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:35.621 00:16:35.621 --- 10.0.0.3 ping statistics --- 00:16:35.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.621 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:35.621 15:07:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:35.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:35.621 00:16:35.621 --- 10.0.0.1 ping statistics --- 00:16:35.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.621 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:35.621 15:07:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.621 15:07:06 -- nvmf/common.sh@421 -- # return 0 00:16:35.621 15:07:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.621 15:07:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.621 15:07:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.621 15:07:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.621 15:07:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.621 15:07:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.621 15:07:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.621 15:07:06 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:16:35.621 15:07:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:35.621 15:07:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.621 15:07:06 -- common/autotest_common.sh@10 -- # set +x 00:16:35.621 15:07:06 -- nvmf/common.sh@469 -- # nvmfpid=81904 00:16:35.621 15:07:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:35.621 15:07:06 -- nvmf/common.sh@470 -- # waitforlisten 81904 00:16:35.621 15:07:06 -- common/autotest_common.sh@829 -- # '[' -z 81904 ']' 00:16:35.621 15:07:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.621 15:07:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.621 15:07:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.621 15:07:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.621 15:07:06 -- common/autotest_common.sh@10 -- # set +x 00:16:35.621 [2024-11-20 15:07:06.404737] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:35.621 [2024-11-20 15:07:06.404834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.879 [2024-11-20 15:07:06.541774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.879 [2024-11-20 15:07:06.577894] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:35.879 [2024-11-20 15:07:06.578037] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.879 [2024-11-20 15:07:06.578051] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:35.879 [2024-11-20 15:07:06.578060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.879 [2024-11-20 15:07:06.578212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.879 [2024-11-20 15:07:06.578300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.879 [2024-11-20 15:07:06.578308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.875 15:07:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.875 15:07:07 -- common/autotest_common.sh@862 -- # return 0 00:16:36.875 15:07:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:36.875 15:07:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.875 15:07:07 -- common/autotest_common.sh@10 -- # set +x 00:16:36.875 15:07:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.875 15:07:07 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.133 [2024-11-20 15:07:07.681836] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.133 15:07:07 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:16:37.391 Malloc0 00:16:37.391 15:07:08 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:37.650 15:07:08 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:37.908 15:07:08 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.166 [2024-11-20 15:07:08.807795] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.166 15:07:08 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:38.424 [2024-11-20 15:07:09.124219] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:38.424 15:07:09 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:38.682 [2024-11-20 15:07:09.388486] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:38.682 15:07:09 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:16:38.682 15:07:09 -- host/failover.sh@31 -- # bdevperf_pid=81962 00:16:38.682 15:07:09 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:38.682 15:07:09 -- host/failover.sh@34 -- # waitforlisten 81962 /var/tmp/bdevperf.sock 00:16:38.682 15:07:09 -- common/autotest_common.sh@829 -- # '[' -z 81962 ']' 00:16:38.682 15:07:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.682 15:07:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:38.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:38.682 15:07:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.682 15:07:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:38.682 15:07:09 -- common/autotest_common.sh@10 -- # set +x 00:16:38.940 15:07:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.940 15:07:09 -- common/autotest_common.sh@862 -- # return 0 00:16:38.940 15:07:09 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:39.508 NVMe0n1 00:16:39.508 15:07:10 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:39.767 00:16:39.767 15:07:10 -- host/failover.sh@39 -- # run_test_pid=81978 00:16:39.767 15:07:10 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:39.767 15:07:10 -- host/failover.sh@41 -- # sleep 1 00:16:40.703 15:07:11 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.961 [2024-11-20 15:07:11.717574] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717633] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717660] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717686] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717695] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717703] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717720] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717736] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717745] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717753] 
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717761] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717769] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717777] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717786] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717794] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717802] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717810] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.717819] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.718208] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 [2024-11-20 15:07:11.718221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbac2b0 is same with the state(5) to be set 00:16:40.961 15:07:11 -- host/failover.sh@45 -- # sleep 3 00:16:44.281 15:07:14 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:44.539 00:16:44.539 15:07:15 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:44.798 [2024-11-20 15:07:15.409236] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409283] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409300] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409330] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409344] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409357] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409369] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 
15:07:15.409381] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409395] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409409] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409422] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409435] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409461] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409475] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409488] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409502] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409516] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409529] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409542] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409582] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409595] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409609] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409622] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409634] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409678] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 [2024-11-20 15:07:15.409692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to 
be set 00:16:44.798 [2024-11-20 15:07:15.409707] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f86b0 is same with the state(5) to be set 00:16:44.798 15:07:15 -- host/failover.sh@50 -- # sleep 3 00:16:48.081 15:07:18 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:48.081 [2024-11-20 15:07:18.720777] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:48.081 15:07:18 -- host/failover.sh@55 -- # sleep 1 00:16:49.011 15:07:19 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:49.269 [2024-11-20 15:07:20.035908] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.035952] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.035964] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.035973] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.035982] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.035990] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036001] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036010] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036018] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036026] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036034] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036042] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036051] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036059] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 [2024-11-20 15:07:20.036067] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9fb20 is same with the state(5) to be set 00:16:49.270 15:07:20 -- host/failover.sh@59 -- # wait 81978 00:16:55.832 0 00:16:55.832 15:07:25 -- host/failover.sh@61 -- # killprocess 81962 00:16:55.832 15:07:25 -- common/autotest_common.sh@936 -- # '[' -z 81962 ']' 00:16:55.832 15:07:25 -- common/autotest_common.sh@940 -- # kill -0 81962 00:16:55.832 15:07:25 -- 
common/autotest_common.sh@941 -- # uname 00:16:55.832 15:07:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:55.832 15:07:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81962 00:16:55.832 killing process with pid 81962 00:16:55.832 15:07:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:55.832 15:07:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:55.832 15:07:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81962' 00:16:55.832 15:07:25 -- common/autotest_common.sh@955 -- # kill 81962 00:16:55.832 15:07:25 -- common/autotest_common.sh@960 -- # wait 81962 00:16:55.832 15:07:25 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:55.832 [2024-11-20 15:07:09.461807] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:55.832 [2024-11-20 15:07:09.461980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81962 ] 00:16:55.832 [2024-11-20 15:07:09.603069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.832 [2024-11-20 15:07:09.644965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.832 Running I/O for 15 seconds... 00:16:55.832 [2024-11-20 15:07:11.718301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:55.832 [2024-11-20 15:07:11.718541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 
15:07:11.718897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.718985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.718999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.719015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.719028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.719044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.832 [2024-11-20 15:07:11.719057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.719073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:108080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.832 [2024-11-20 15:07:11.719087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.719103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.832 [2024-11-20 15:07:11.719141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.832 [2024-11-20 15:07:11.719160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719220] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:108112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:108120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719557] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:108160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:108168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.719855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.719972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.719989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.833 [2024-11-20 15:07:11.720152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.833 [2024-11-20 15:07:11.720182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.833 [2024-11-20 15:07:11.720197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:108224 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:55.834 [2024-11-20 15:07:11.720536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:107760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:107816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 
15:07:11.720858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.720948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.720977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.720993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.721007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.721036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.721066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:108384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.721096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:108392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.721125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.834 [2024-11-20 15:07:11.721165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.721196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.834 [2024-11-20 15:07:11.721212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.834 [2024-11-20 15:07:11.721225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:108424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:107840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:107928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.721967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.721984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:108544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.721998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:16:55.835 [2024-11-20 15:07:11.722130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.835 [2024-11-20 15:07:11.722175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:108600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.722204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:108608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.835 [2024-11-20 15:07:11.722234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.835 [2024-11-20 15:07:11.722249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:11.722441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 
15:07:11.722456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14d4a40 is same with the state(5) to be set 00:16:55.836 [2024-11-20 15:07:11.722474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:55.836 [2024-11-20 15:07:11.722484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:55.836 [2024-11-20 15:07:11.722495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108024 len:8 PRP1 0x0 PRP2 0x0 00:16:55.836 [2024-11-20 15:07:11.722508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722583] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14d4a40 was disconnected and freed. reset controller. 00:16:55.836 [2024-11-20 15:07:11.722610] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:55.836 [2024-11-20 15:07:11.722686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:11.722711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:11.722740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:11.722768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:11.722796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:11.722809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:55.836 [2024-11-20 15:07:11.722870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0d40 (9): Bad file descriptor 00:16:55.836 [2024-11-20 15:07:11.725566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:55.836 [2024-11-20 15:07:11.759744] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
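Every completion in the run above carries the status pair printed as "(00/08)": status code type 0x00 (generic command status) and status code 0x08, i.e. the command was aborted because its submission queue was deleted, which is expected while the 10.0.0.2:4420 path is torn down and bdev_nvme fails over to 10.0.0.2:4421. The trailing dnr:0 marks these aborts as retryable, so the I/O can be reissued once the controller reset completes. A minimal, hypothetical decoder for the "(SCT/SC)" pair in these lines (not part of the SPDK tree; it assumes only the standard NVMe generic status code values) might look like:

    import re

    # Hypothetical helper, not from SPDK: decode the "(SCT/SC)" pair that
    # spdk_nvme_print_completion emits, e.g. "ABORTED - SQ DELETION (00/08)".
    GENERIC_STATUS = {          # status code type 0x0 = generic command status
        0x00: "SUCCESSFUL COMPLETION",
        0x07: "COMMAND ABORT REQUESTED",
        0x08: "COMMAND ABORTED - SQ DELETION",
    }

    LINE_RE = re.compile(r"\((?P<sct>[0-9a-fA-F]{2})/(?P<sc>[0-9a-fA-F]{2})\).*dnr:(?P<dnr>[01])")

    def decode(line):
        """Summarize one completion line; return '' when the pattern is absent."""
        m = LINE_RE.search(line)
        if not m:
            return ""
        sct, sc, dnr = int(m["sct"], 16), int(m["sc"], 16), int(m["dnr"])
        if sct == 0:
            name = GENERIC_STATUS.get(sc, "sc=0x%02x" % sc)
        else:
            name = "sct=0x%02x sc=0x%02x" % (sct, sc)
        return "%s (%s)" % (name, "do-not-retry" if dnr else "retryable")

    print(decode("ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"))
    # -> COMMAND ABORTED - SQ DELETION (retryable)

Read against the log, the retryable status is consistent with the failover sequence that follows: the qpair at 0x14d4a40 is disconnected and freed, the failover trid switches from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully before the 15:07:15 iteration begins.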
00:16:55.836 [2024-11-20 15:07:15.408595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:15.408685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.408738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:15.408756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.408771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:15.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.408799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.836 [2024-11-20 15:07:15.408812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.408826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0d40 is same with the state(5) to be set 00:16:55.836 [2024-11-20 15:07:15.409777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.409985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:88096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.409999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:88144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.836 [2024-11-20 15:07:15.410192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.836 [2024-11-20 15:07:15.410208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:88160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:88200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.837 [2024-11-20 15:07:15.410816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:88288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 
[2024-11-20 15:07:15.410936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.410979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.410994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.411008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.411024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.411037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.837 [2024-11-20 15:07:15.411053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.837 [2024-11-20 15:07:15.411067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:88304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:70 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:88408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:88424 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.411953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.411983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.411998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.412012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.412028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.838 [2024-11-20 15:07:15.412041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.412057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.412071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.412087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.838 [2024-11-20 15:07:15.412107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.838 [2024-11-20 15:07:15.412123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:55.839 [2024-11-20 15:07:15.412226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:88472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:88504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:88512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412545] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:88536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:88544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:88552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:88560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:88600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.839 [2024-11-20 15:07:15.412926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.412972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:88024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.412986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.413002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:88040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.413016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.413031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.413045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.413061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:88080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.413075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.413090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.413104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.839 [2024-11-20 15:07:15.413120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.839 [2024-11-20 15:07:15.413133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:88176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:88672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:55.840 [2024-11-20 15:07:15.413484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.840 [2024-11-20 15:07:15.413757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:15.413786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413801] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bfc00 is same with the state(5) to be set 00:16:55.840 [2024-11-20 15:07:15.413819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:55.840 [2024-11-20 15:07:15.413829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:55.840 [2024-11-20 15:07:15.413841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88192 len:8 PRP1 0x0 PRP2 0x0 00:16:55.840 [2024-11-20 15:07:15.413855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:15.413903] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14bfc00 was disconnected and freed. reset controller. 00:16:55.840 [2024-11-20 15:07:15.413922] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:55.840 [2024-11-20 15:07:15.413937] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:55.840 [2024-11-20 15:07:15.416373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:55.840 [2024-11-20 15:07:15.416419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0d40 (9): Bad file descriptor 00:16:55.840 [2024-11-20 15:07:15.447548] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:55.840 [2024-11-20 15:07:20.035786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.840 [2024-11-20 15:07:20.035862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:20.035885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.840 [2024-11-20 15:07:20.035900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:20.035914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.840 [2024-11-20 15:07:20.035928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:20.035942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:55.840 [2024-11-20 15:07:20.035956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:20.035994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0d40 is same with the state(5) to be set 00:16:55.840 [2024-11-20 15:07:20.036124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:20.036150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.840 [2024-11-20 15:07:20.036174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.840 [2024-11-20 15:07:20.036190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:34432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:34472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:34480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:34512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 [2024-11-20 15:07:20.036672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 [2024-11-20 15:07:20.036761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:34552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 [2024-11-20 15:07:20.036790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33920 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.036974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.036990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:34024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.037004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.037033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 [2024-11-20 15:07:20.037063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 [2024-11-20 15:07:20.037092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:34576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.841 
[2024-11-20 15:07:20.037121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.037150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:34592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.841 [2024-11-20 15:07:20.037179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.841 [2024-11-20 15:07:20.037194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:34608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:34616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:34648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:34656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:34680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:34696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:34064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:34072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:34080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:34704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:34712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:34720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.037952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.037981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.037997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:55.842 [2024-11-20 15:07:20.038011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.842 [2024-11-20 15:07:20.038027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:55.842 [2024-11-20 15:07:20.038040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:16:55.842 [2024-11-20 15:07:20.038056 - 15:07:20.040060] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated for every request still outstanding on qid:1: READ/WRITE sqid:1 nsid:1 lba:34144-35088 len:8, each completed as ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 while the submission queue was deleted for failover
00:16:55.844 [2024-11-20 15:07:20.040075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a3970 is same with the state(5) to be set
00:16:55.845 [2024-11-20 15:07:20.040091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:16:55.845 [2024-11-20 15:07:20.040102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:16:55.845 [2024-11-20 15:07:20.040113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34496 len:8 PRP1 0x0 PRP2 0x0
00:16:55.845 [2024-11-20 15:07:20.040126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:55.845 [2024-11-20 15:07:20.040175] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a3970 was disconnected and freed. reset controller.
00:16:55.845 [2024-11-20 15:07:20.040201] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:55.845 [2024-11-20 15:07:20.040216] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:55.845 [2024-11-20 15:07:20.042761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:55.845 [2024-11-20 15:07:20.042802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a0d40 (9): Bad file descriptor
00:16:55.845 [2024-11-20 15:07:20.078480] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
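The abort storm above and the "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" notice are the expected reaction to dropping the active path while bdevperf still has I/O queued: every in-flight command on the deleted submission queue is completed as ABORTED - SQ DELETION and the controller is reset against the next registered trid. A minimal sketch of how one such failover is provoked, assuming the bdevperf RPC socket and subsystem NQN used throughout this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register two paths (trids) under the same controller name
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop the path currently carrying I/O; bdev_nvme aborts the queued requests
  # and fails over to the remaining trid, as logged above
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1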
00:16:55.845 00:16:55.845 Latency(us) 00:16:55.845 [2024-11-20T15:07:26.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.845 [2024-11-20T15:07:26.649Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:55.845 Verification LBA range: start 0x0 length 0x4000 00:16:55.845 NVMe0n1 : 15.01 12343.53 48.22 318.57 0.00 10088.69 551.10 43849.54 00:16:55.845 [2024-11-20T15:07:26.649Z] =================================================================================================================== 00:16:55.845 [2024-11-20T15:07:26.649Z] Total : 12343.53 48.22 318.57 0.00 10088.69 551.10 43849.54 00:16:55.845 Received shutdown signal, test time was about 15.000000 seconds 00:16:55.845 00:16:55.845 Latency(us) 00:16:55.845 [2024-11-20T15:07:26.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:55.845 [2024-11-20T15:07:26.649Z] =================================================================================================================== 00:16:55.845 [2024-11-20T15:07:26.649Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:55.845 15:07:25 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:55.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.845 15:07:25 -- host/failover.sh@65 -- # count=3 00:16:55.845 15:07:25 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:55.845 15:07:25 -- host/failover.sh@73 -- # bdevperf_pid=82160 00:16:55.845 15:07:25 -- host/failover.sh@75 -- # waitforlisten 82160 /var/tmp/bdevperf.sock 00:16:55.845 15:07:25 -- common/autotest_common.sh@829 -- # '[' -z 82160 ']' 00:16:55.845 15:07:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.845 15:07:25 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:55.845 15:07:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:55.845 15:07:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
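The pass/fail gate at host/failover.sh@65-67 above is just a line count: the path drops performed earlier must have produced exactly three "Resetting controller successful" messages in the captured bdevperf output. A sketch of that check, assuming the output was redirected to the try.txt file that is cat'ed and removed later in this log:

  out=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  count=$(grep -c 'Resetting controller successful' "$out")
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi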
00:16:55.845 15:07:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:55.845 15:07:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.845 15:07:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.845 15:07:26 -- common/autotest_common.sh@862 -- # return 0 00:16:55.845 15:07:26 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:55.845 [2024-11-20 15:07:26.343952] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:55.845 15:07:26 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:56.104 [2024-11-20 15:07:26.648249] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:56.104 15:07:26 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.362 NVMe0n1 00:16:56.362 15:07:27 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:56.620 00:16:56.621 15:07:27 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:57.186 00:16:57.186 15:07:27 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:57.186 15:07:27 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:57.445 15:07:28 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:57.703 15:07:28 -- host/failover.sh@87 -- # sleep 3 00:17:00.990 15:07:31 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:00.990 15:07:31 -- host/failover.sh@88 -- # grep -q NVMe0 00:17:00.990 15:07:31 -- host/failover.sh@90 -- # run_test_pid=82231 00:17:00.990 15:07:31 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:00.990 15:07:31 -- host/failover.sh@92 -- # wait 82231 00:17:02.364 0 00:17:02.364 15:07:32 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:02.364 [2024-11-20 15:07:25.837136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:02.364 [2024-11-20 15:07:25.837234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82160 ] 00:17:02.364 [2024-11-20 15:07:25.973625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.364 [2024-11-20 15:07:26.008201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.364 [2024-11-20 15:07:28.363349] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:17:02.364 [2024-11-20 15:07:28.363500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.364 [2024-11-20 15:07:28.363528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.364 [2024-11-20 15:07:28.363547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.364 [2024-11-20 15:07:28.363561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.364 [2024-11-20 15:07:28.363576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.364 [2024-11-20 15:07:28.363590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.364 [2024-11-20 15:07:28.363605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:02.364 [2024-11-20 15:07:28.363619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:02.364 [2024-11-20 15:07:28.363633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:02.364 [2024-11-20 15:07:28.363699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:02.364 [2024-11-20 15:07:28.363732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18bbd40 (9): Bad file descriptor 00:17:02.364 [2024-11-20 15:07:28.375232] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:02.365 Running I/O for 1 seconds... 
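The 1-second verify run echoed above is driven from outside: bdevperf is started idle with -z and an RPC socket, the NVMe-oF controller is attached over that socket, and the workload is kicked off with the bdevperf.py helper. A sketch of that sequence, reusing the binaries and options already invoked earlier in this log:

  # start bdevperf idle (-z) with the same RPC socket and workload options as this run
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # expose the remote namespace as bdev NVMe0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # run the configured verify workload once and wait for it to finish
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests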
00:17:02.365 00:17:02.365 Latency(us) 00:17:02.365 [2024-11-20T15:07:33.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.365 [2024-11-20T15:07:33.169Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:02.365 Verification LBA range: start 0x0 length 0x4000 00:17:02.365 NVMe0n1 : 1.01 12678.79 49.53 0.00 0.00 10039.18 923.46 13166.78 00:17:02.365 [2024-11-20T15:07:33.169Z] =================================================================================================================== 00:17:02.365 [2024-11-20T15:07:33.169Z] Total : 12678.79 49.53 0.00 0.00 10039.18 923.46 13166.78 00:17:02.365 15:07:32 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:02.365 15:07:32 -- host/failover.sh@95 -- # grep -q NVMe0 00:17:02.365 15:07:33 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:02.623 15:07:33 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:02.623 15:07:33 -- host/failover.sh@99 -- # grep -q NVMe0 00:17:03.189 15:07:33 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:03.447 15:07:34 -- host/failover.sh@101 -- # sleep 3 00:17:06.728 15:07:37 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:06.728 15:07:37 -- host/failover.sh@103 -- # grep -q NVMe0 00:17:06.728 15:07:37 -- host/failover.sh@108 -- # killprocess 82160 00:17:06.728 15:07:37 -- common/autotest_common.sh@936 -- # '[' -z 82160 ']' 00:17:06.728 15:07:37 -- common/autotest_common.sh@940 -- # kill -0 82160 00:17:06.728 15:07:37 -- common/autotest_common.sh@941 -- # uname 00:17:06.728 15:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:06.728 15:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82160 00:17:06.728 15:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:06.728 15:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:06.728 killing process with pid 82160 00:17:06.728 15:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82160' 00:17:06.728 15:07:37 -- common/autotest_common.sh@955 -- # kill 82160 00:17:06.728 15:07:37 -- common/autotest_common.sh@960 -- # wait 82160 00:17:06.728 15:07:37 -- host/failover.sh@110 -- # sync 00:17:06.985 15:07:37 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.243 15:07:37 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:17:07.243 15:07:37 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:07.243 15:07:37 -- host/failover.sh@116 -- # nvmftestfini 00:17:07.243 15:07:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:07.243 15:07:37 -- nvmf/common.sh@116 -- # sync 00:17:07.243 15:07:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:07.243 15:07:37 -- nvmf/common.sh@119 -- # set +e 00:17:07.243 15:07:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:07.243 15:07:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:07.243 rmmod nvme_tcp 
00:17:07.243 rmmod nvme_fabrics 00:17:07.243 rmmod nvme_keyring 00:17:07.243 15:07:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:07.243 15:07:37 -- nvmf/common.sh@123 -- # set -e 00:17:07.243 15:07:37 -- nvmf/common.sh@124 -- # return 0 00:17:07.243 15:07:37 -- nvmf/common.sh@477 -- # '[' -n 81904 ']' 00:17:07.243 15:07:37 -- nvmf/common.sh@478 -- # killprocess 81904 00:17:07.243 15:07:37 -- common/autotest_common.sh@936 -- # '[' -z 81904 ']' 00:17:07.243 15:07:37 -- common/autotest_common.sh@940 -- # kill -0 81904 00:17:07.243 15:07:37 -- common/autotest_common.sh@941 -- # uname 00:17:07.243 15:07:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:07.243 15:07:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81904 00:17:07.243 15:07:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:07.243 15:07:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:07.243 killing process with pid 81904 00:17:07.243 15:07:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81904' 00:17:07.243 15:07:37 -- common/autotest_common.sh@955 -- # kill 81904 00:17:07.243 15:07:37 -- common/autotest_common.sh@960 -- # wait 81904 00:17:07.501 15:07:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:07.501 15:07:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:07.501 15:07:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:07.501 15:07:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:07.501 15:07:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:07.501 15:07:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.501 15:07:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.501 15:07:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.501 15:07:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:07.501 00:17:07.501 real 0m32.304s 00:17:07.501 user 2m5.642s 00:17:07.501 sys 0m5.405s 00:17:07.501 15:07:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:07.501 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 ************************************ 00:17:07.501 END TEST nvmf_failover 00:17:07.501 ************************************ 00:17:07.501 15:07:38 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:07.501 15:07:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:07.501 15:07:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:07.501 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:07.501 ************************************ 00:17:07.501 START TEST nvmf_discovery 00:17:07.501 ************************************ 00:17:07.501 15:07:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:17:07.501 * Looking for test storage... 
00:17:07.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:07.501 15:07:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:07.501 15:07:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:07.501 15:07:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:07.759 15:07:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:07.759 15:07:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:07.759 15:07:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:07.759 15:07:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:07.759 15:07:38 -- scripts/common.sh@335 -- # IFS=.-: 00:17:07.759 15:07:38 -- scripts/common.sh@335 -- # read -ra ver1 00:17:07.759 15:07:38 -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.759 15:07:38 -- scripts/common.sh@336 -- # read -ra ver2 00:17:07.759 15:07:38 -- scripts/common.sh@337 -- # local 'op=<' 00:17:07.759 15:07:38 -- scripts/common.sh@339 -- # ver1_l=2 00:17:07.759 15:07:38 -- scripts/common.sh@340 -- # ver2_l=1 00:17:07.759 15:07:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:07.759 15:07:38 -- scripts/common.sh@343 -- # case "$op" in 00:17:07.759 15:07:38 -- scripts/common.sh@344 -- # : 1 00:17:07.759 15:07:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:07.759 15:07:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.759 15:07:38 -- scripts/common.sh@364 -- # decimal 1 00:17:07.759 15:07:38 -- scripts/common.sh@352 -- # local d=1 00:17:07.759 15:07:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.759 15:07:38 -- scripts/common.sh@354 -- # echo 1 00:17:07.759 15:07:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:07.759 15:07:38 -- scripts/common.sh@365 -- # decimal 2 00:17:07.759 15:07:38 -- scripts/common.sh@352 -- # local d=2 00:17:07.759 15:07:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.759 15:07:38 -- scripts/common.sh@354 -- # echo 2 00:17:07.759 15:07:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:07.759 15:07:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:07.759 15:07:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:07.759 15:07:38 -- scripts/common.sh@367 -- # return 0 00:17:07.759 15:07:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.759 15:07:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.759 --rc genhtml_branch_coverage=1 00:17:07.759 --rc genhtml_function_coverage=1 00:17:07.759 --rc genhtml_legend=1 00:17:07.759 --rc geninfo_all_blocks=1 00:17:07.759 --rc geninfo_unexecuted_blocks=1 00:17:07.759 00:17:07.759 ' 00:17:07.759 15:07:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.759 --rc genhtml_branch_coverage=1 00:17:07.759 --rc genhtml_function_coverage=1 00:17:07.759 --rc genhtml_legend=1 00:17:07.759 --rc geninfo_all_blocks=1 00:17:07.759 --rc geninfo_unexecuted_blocks=1 00:17:07.759 00:17:07.759 ' 00:17:07.759 15:07:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.759 --rc genhtml_branch_coverage=1 00:17:07.759 --rc genhtml_function_coverage=1 00:17:07.759 --rc genhtml_legend=1 00:17:07.759 --rc geninfo_all_blocks=1 00:17:07.759 --rc geninfo_unexecuted_blocks=1 00:17:07.759 00:17:07.759 ' 00:17:07.759 
15:07:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.759 --rc genhtml_branch_coverage=1 00:17:07.759 --rc genhtml_function_coverage=1 00:17:07.759 --rc genhtml_legend=1 00:17:07.759 --rc geninfo_all_blocks=1 00:17:07.759 --rc geninfo_unexecuted_blocks=1 00:17:07.759 00:17:07.759 ' 00:17:07.759 15:07:38 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.759 15:07:38 -- nvmf/common.sh@7 -- # uname -s 00:17:07.759 15:07:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.759 15:07:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.759 15:07:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.759 15:07:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.759 15:07:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.759 15:07:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.759 15:07:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.759 15:07:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.759 15:07:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.759 15:07:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.759 15:07:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:07.759 15:07:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:07.759 15:07:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.759 15:07:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.759 15:07:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.759 15:07:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.759 15:07:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.759 15:07:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.759 15:07:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.759 15:07:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.759 15:07:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.760 15:07:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.760 15:07:38 -- paths/export.sh@5 -- # export PATH 00:17:07.760 15:07:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.760 15:07:38 -- nvmf/common.sh@46 -- # : 0 00:17:07.760 15:07:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:07.760 15:07:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:07.760 15:07:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:07.760 15:07:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.760 15:07:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.760 15:07:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:07.760 15:07:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:07.760 15:07:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:07.760 15:07:38 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:17:07.760 15:07:38 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:17:07.760 15:07:38 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:17:07.760 15:07:38 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:17:07.760 15:07:38 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:17:07.760 15:07:38 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:17:07.760 15:07:38 -- host/discovery.sh@25 -- # nvmftestinit 00:17:07.760 15:07:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:07.760 15:07:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.760 15:07:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:07.760 15:07:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:07.760 15:07:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:07.760 15:07:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.760 15:07:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.760 15:07:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.760 15:07:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:07.760 15:07:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:07.760 15:07:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:07.760 15:07:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:07.760 15:07:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:07.760 15:07:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:07.760 15:07:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.760 15:07:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:07.760 15:07:38 -- 
nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:07.760 15:07:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:07.760 15:07:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.760 15:07:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.760 15:07:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.760 15:07:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.760 15:07:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.760 15:07:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.760 15:07:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.760 15:07:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.760 15:07:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:07.760 15:07:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:07.760 Cannot find device "nvmf_tgt_br" 00:17:07.760 15:07:38 -- nvmf/common.sh@154 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.760 Cannot find device "nvmf_tgt_br2" 00:17:07.760 15:07:38 -- nvmf/common.sh@155 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:07.760 15:07:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:07.760 Cannot find device "nvmf_tgt_br" 00:17:07.760 15:07:38 -- nvmf/common.sh@157 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:07.760 Cannot find device "nvmf_tgt_br2" 00:17:07.760 15:07:38 -- nvmf/common.sh@158 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:07.760 15:07:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:07.760 15:07:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.760 15:07:38 -- nvmf/common.sh@161 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.760 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.760 15:07:38 -- nvmf/common.sh@162 -- # true 00:17:07.760 15:07:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.760 15:07:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.760 15:07:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.760 15:07:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.760 15:07:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.760 15:07:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.760 15:07:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.760 15:07:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:07.760 15:07:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:07.760 15:07:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:07.760 15:07:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:08.018 15:07:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:08.018 15:07:38 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:08.018 15:07:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:08.018 15:07:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:08.018 15:07:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:08.018 15:07:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:08.018 15:07:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:08.018 15:07:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:08.018 15:07:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:08.018 15:07:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:08.018 15:07:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:08.018 15:07:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:08.018 15:07:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:08.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:08.018 00:17:08.018 --- 10.0.0.2 ping statistics --- 00:17:08.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.018 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:08.018 15:07:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:08.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:08.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:08.018 00:17:08.018 --- 10.0.0.3 ping statistics --- 00:17:08.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.018 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:08.018 15:07:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:08.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:08.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:08.018 00:17:08.018 --- 10.0.0.1 ping statistics --- 00:17:08.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.018 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:08.018 15:07:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.018 15:07:38 -- nvmf/common.sh@421 -- # return 0 00:17:08.018 15:07:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:08.018 15:07:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.018 15:07:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:08.018 15:07:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:08.018 15:07:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.018 15:07:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:08.018 15:07:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:08.018 15:07:38 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:17:08.018 15:07:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:08.018 15:07:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:08.018 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.018 15:07:38 -- nvmf/common.sh@469 -- # nvmfpid=82518 00:17:08.018 15:07:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:08.018 15:07:38 -- nvmf/common.sh@470 -- # waitforlisten 82518 00:17:08.018 15:07:38 -- common/autotest_common.sh@829 -- # '[' -z 82518 ']' 00:17:08.018 15:07:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.018 15:07:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.018 15:07:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.018 15:07:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.018 15:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.018 [2024-11-20 15:07:38.751275] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:08.019 [2024-11-20 15:07:38.751715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.278 [2024-11-20 15:07:38.892586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.278 [2024-11-20 15:07:38.927677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.278 [2024-11-20 15:07:38.927893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.278 [2024-11-20 15:07:38.927917] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.278 [2024-11-20 15:07:38.927932] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
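nvmf_veth_init above builds the whole fabric on a single host: a network namespace for the target, veth pairs joined by a bridge, and the 10.0.0.0/24 addresses that the three pings then verify before the target is started. A condensed sketch of the same topology, assuming the interface and namespace names used in this run (the second target interface, nvmf_tgt_if2 at 10.0.0.3, follows the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, as checked above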
00:17:08.278 [2024-11-20 15:07:38.927966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.278 15:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.278 15:07:39 -- common/autotest_common.sh@862 -- # return 0 00:17:08.278 15:07:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:08.278 15:07:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:08.278 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 15:07:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.538 15:07:39 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.538 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 [2024-11-20 15:07:39.093050] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.538 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 15:07:39 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:17:08.538 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 [2024-11-20 15:07:39.101165] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:08.538 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 15:07:39 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:17:08.538 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 null0 00:17:08.538 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 15:07:39 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:17:08.538 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 null1 00:17:08.538 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 15:07:39 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:17:08.538 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.538 15:07:39 -- host/discovery.sh@45 -- # hostpid=82543 00:17:08.538 15:07:39 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:17:08.538 15:07:39 -- host/discovery.sh@46 -- # waitforlisten 82543 /tmp/host.sock 00:17:08.538 15:07:39 -- common/autotest_common.sh@829 -- # '[' -z 82543 ']' 00:17:08.538 15:07:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:08.538 15:07:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:08.538 15:07:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:08.538 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:08.538 15:07:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:08.538 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.538 [2024-11-20 15:07:39.173812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:08.538 [2024-11-20 15:07:39.174303] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82543 ] 00:17:08.538 [2024-11-20 15:07:39.309411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.798 [2024-11-20 15:07:39.347884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:08.798 [2024-11-20 15:07:39.348075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.798 15:07:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.798 15:07:39 -- common/autotest_common.sh@862 -- # return 0 00:17:08.798 15:07:39 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:08.798 15:07:39 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:17:08.798 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.798 15:07:39 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:17:08.798 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.798 15:07:39 -- host/discovery.sh@72 -- # notify_id=0 00:17:08.798 15:07:39 -- host/discovery.sh@78 -- # get_subsystem_names 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:08.798 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # xargs 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # sort 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.798 15:07:39 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:17:08.798 15:07:39 -- host/discovery.sh@79 -- # get_bdev_list 00:17:08.798 15:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:08.798 15:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:08.798 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- host/discovery.sh@55 -- # sort 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- host/discovery.sh@55 -- # xargs 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.798 15:07:39 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:17:08.798 15:07:39 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:17:08.798 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.798 15:07:39 -- host/discovery.sh@82 -- # get_subsystem_names 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:08.798 15:07:39 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # sort 00:17:08.798 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.798 15:07:39 -- host/discovery.sh@59 -- # xargs 00:17:08.798 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:17:09.057 15:07:39 -- host/discovery.sh@83 -- # get_bdev_list 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # xargs 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # sort 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:17:09.057 15:07:39 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@86 -- # get_subsystem_names 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # sort 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # xargs 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:17:09.057 15:07:39 -- host/discovery.sh@87 -- # get_bdev_list 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # xargs 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # sort 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:17:09.057 15:07:39 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 [2024-11-20 15:07:39.785434] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@92 -- # get_subsystem_names 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # sort 00:17:09.057 15:07:39 -- host/discovery.sh@59 -- # xargs 
00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.057 15:07:39 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:17:09.057 15:07:39 -- host/discovery.sh@93 -- # get_bdev_list 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:09.057 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:09.057 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # sort 00:17:09.057 15:07:39 -- host/discovery.sh@55 -- # xargs 00:17:09.057 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.316 15:07:39 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:17:09.316 15:07:39 -- host/discovery.sh@94 -- # get_notification_count 00:17:09.316 15:07:39 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:09.316 15:07:39 -- host/discovery.sh@74 -- # jq '. | length' 00:17:09.316 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.316 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.316 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.316 15:07:39 -- host/discovery.sh@74 -- # notification_count=0 00:17:09.316 15:07:39 -- host/discovery.sh@75 -- # notify_id=0 00:17:09.316 15:07:39 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:17:09.316 15:07:39 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:17:09.316 15:07:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.316 15:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.316 15:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.316 15:07:39 -- host/discovery.sh@100 -- # sleep 1 00:17:09.885 [2024-11-20 15:07:40.455998] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:09.885 [2024-11-20 15:07:40.456053] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:09.885 [2024-11-20 15:07:40.456076] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:09.885 [2024-11-20 15:07:40.462062] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:09.885 [2024-11-20 15:07:40.518108] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:09.885 [2024-11-20 15:07:40.518148] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:10.452 15:07:40 -- host/discovery.sh@101 -- # get_subsystem_names 00:17:10.452 15:07:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:10.452 15:07:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:10.452 15:07:40 -- host/discovery.sh@59 -- # xargs 00:17:10.452 15:07:40 -- host/discovery.sh@59 -- # sort 00:17:10.452 15:07:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.452 15:07:40 -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 15:07:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@102 -- # get_bdev_list 00:17:10.452 15:07:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:10.452 
15:07:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:10.452 15:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.452 15:07:41 -- host/discovery.sh@55 -- # sort 00:17:10.452 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 15:07:41 -- host/discovery.sh@55 -- # xargs 00:17:10.452 15:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:17:10.452 15:07:41 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:10.452 15:07:41 -- host/discovery.sh@63 -- # sort -n 00:17:10.452 15:07:41 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:10.452 15:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.452 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 15:07:41 -- host/discovery.sh@63 -- # xargs 00:17:10.452 15:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@104 -- # get_notification_count 00:17:10.452 15:07:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:17:10.452 15:07:41 -- host/discovery.sh@74 -- # jq '. | length' 00:17:10.452 15:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.452 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:17:10.452 15:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.452 15:07:41 -- host/discovery.sh@74 -- # notification_count=1 00:17:10.452 15:07:41 -- host/discovery.sh@75 -- # notify_id=1 00:17:10.453 15:07:41 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:17:10.453 15:07:41 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:17:10.453 15:07:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.453 15:07:41 -- common/autotest_common.sh@10 -- # set +x 00:17:10.453 15:07:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.453 15:07:41 -- host/discovery.sh@109 -- # sleep 1 00:17:11.834 15:07:42 -- host/discovery.sh@110 -- # get_bdev_list 00:17:11.834 15:07:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:11.834 15:07:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.834 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:17:11.834 15:07:42 -- host/discovery.sh@55 -- # sort 00:17:11.834 15:07:42 -- host/discovery.sh@55 -- # xargs 00:17:11.834 15:07:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:11.834 15:07:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.834 15:07:42 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:11.834 15:07:42 -- host/discovery.sh@111 -- # get_notification_count 00:17:11.834 15:07:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:17:11.834 15:07:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.834 15:07:42 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:11.834 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:17:11.834 15:07:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.834 15:07:42 -- host/discovery.sh@74 -- # notification_count=1 00:17:11.834 15:07:42 -- host/discovery.sh@75 -- # notify_id=2 00:17:11.834 15:07:42 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:17:11.834 15:07:42 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:17:11.834 15:07:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.834 15:07:42 -- common/autotest_common.sh@10 -- # set +x 00:17:11.834 [2024-11-20 15:07:42.312194] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:11.834 [2024-11-20 15:07:42.313121] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:11.834 [2024-11-20 15:07:42.313163] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:11.834 15:07:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.834 15:07:42 -- host/discovery.sh@117 -- # sleep 1 00:17:11.834 [2024-11-20 15:07:42.319116] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:17:11.834 [2024-11-20 15:07:42.379404] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:11.834 [2024-11-20 15:07:42.379452] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:11.834 [2024-11-20 15:07:42.379461] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:12.769 15:07:43 -- host/discovery.sh@118 -- # get_subsystem_names 00:17:12.769 15:07:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:12.769 15:07:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:12.769 15:07:43 -- host/discovery.sh@59 -- # sort 00:17:12.769 15:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.769 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.769 15:07:43 -- host/discovery.sh@59 -- # xargs 00:17:12.769 15:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.769 15:07:43 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.769 15:07:43 -- host/discovery.sh@119 -- # get_bdev_list 00:17:12.769 15:07:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:12.770 15:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.770 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.770 15:07:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:12.770 15:07:43 -- host/discovery.sh@55 -- # sort 00:17:12.770 15:07:43 -- host/discovery.sh@55 -- # xargs 00:17:12.770 15:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:17:12.770 15:07:43 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:12.770 15:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.770 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.770 15:07:43 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:17:12.770 15:07:43 -- host/discovery.sh@63 -- # sort -n 00:17:12.770 15:07:43 -- host/discovery.sh@63 -- # xargs 00:17:12.770 15:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@121 -- # get_notification_count 00:17:12.770 15:07:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:12.770 15:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.770 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.770 15:07:43 -- host/discovery.sh@74 -- # jq '. | length' 00:17:12.770 15:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@74 -- # notification_count=0 00:17:12.770 15:07:43 -- host/discovery.sh@75 -- # notify_id=2 00:17:12.770 15:07:43 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:12.770 15:07:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.770 15:07:43 -- common/autotest_common.sh@10 -- # set +x 00:17:12.770 [2024-11-20 15:07:43.546530] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:17:12.770 [2024-11-20 15:07:43.546573] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:12.770 15:07:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.770 15:07:43 -- host/discovery.sh@127 -- # sleep 1 00:17:12.770 [2024-11-20 15:07:43.552522] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:17:12.770 [2024-11-20 15:07:43.552559] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:12.770 [2024-11-20 15:07:43.552686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.770 [2024-11-20 15:07:43.552721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.770 [2024-11-20 15:07:43.552735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.770 [2024-11-20 15:07:43.552746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.770 [2024-11-20 15:07:43.552755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.770 [2024-11-20 15:07:43.552764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.770 [2024-11-20 15:07:43.552774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:12.770 [2024-11-20 15:07:43.552784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.770 [2024-11-20 15:07:43.552793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee1150 is same with the 
state(5) to be set 00:17:14.143 15:07:44 -- host/discovery.sh@128 -- # get_subsystem_names 00:17:14.143 15:07:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:14.143 15:07:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:14.143 15:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.143 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.143 15:07:44 -- host/discovery.sh@59 -- # xargs 00:17:14.143 15:07:44 -- host/discovery.sh@59 -- # sort 00:17:14.143 15:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@129 -- # get_bdev_list 00:17:14.144 15:07:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:14.144 15:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.144 15:07:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:14.144 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.144 15:07:44 -- host/discovery.sh@55 -- # xargs 00:17:14.144 15:07:44 -- host/discovery.sh@55 -- # sort 00:17:14.144 15:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:17:14.144 15:07:44 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:17:14.144 15:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.144 15:07:44 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:17:14.144 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.144 15:07:44 -- host/discovery.sh@63 -- # xargs 00:17:14.144 15:07:44 -- host/discovery.sh@63 -- # sort -n 00:17:14.144 15:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@131 -- # get_notification_count 00:17:14.144 15:07:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:14.144 15:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.144 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.144 15:07:44 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:14.144 15:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@74 -- # notification_count=0 00:17:14.144 15:07:44 -- host/discovery.sh@75 -- # notify_id=2 00:17:14.144 15:07:44 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:17:14.144 15:07:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.144 15:07:44 -- common/autotest_common.sh@10 -- # set +x 00:17:14.144 15:07:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.144 15:07:44 -- host/discovery.sh@135 -- # sleep 1 00:17:15.081 15:07:45 -- host/discovery.sh@136 -- # get_subsystem_names 00:17:15.081 15:07:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:17:15.081 15:07:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:17:15.081 15:07:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.081 15:07:45 -- host/discovery.sh@59 -- # sort 00:17:15.081 15:07:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.081 15:07:45 -- host/discovery.sh@59 -- # xargs 00:17:15.081 15:07:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.081 15:07:45 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:17:15.081 15:07:45 -- host/discovery.sh@137 -- # get_bdev_list 00:17:15.081 15:07:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:15.081 15:07:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.081 15:07:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.081 15:07:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:15.081 15:07:45 -- host/discovery.sh@55 -- # xargs 00:17:15.081 15:07:45 -- host/discovery.sh@55 -- # sort 00:17:15.081 15:07:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.340 15:07:45 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:17:15.340 15:07:45 -- host/discovery.sh@138 -- # get_notification_count 00:17:15.340 15:07:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:17:15.340 15:07:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:17:15.340 15:07:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.340 15:07:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.340 15:07:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.340 15:07:45 -- host/discovery.sh@74 -- # notification_count=2 00:17:15.340 15:07:45 -- host/discovery.sh@75 -- # notify_id=4 00:17:15.340 15:07:45 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:17:15.340 15:07:45 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:15.340 15:07:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.340 15:07:45 -- common/autotest_common.sh@10 -- # set +x 00:17:16.273 [2024-11-20 15:07:46.988825] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:16.273 [2024-11-20 15:07:46.988870] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:16.273 [2024-11-20 15:07:46.988889] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:16.273 [2024-11-20 15:07:46.994855] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:17:16.273 [2024-11-20 15:07:47.054289] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:16.273 [2024-11-20 15:07:47.054354] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:17:16.273 15:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.273 15:07:47 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.273 15:07:47 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.273 15:07:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.273 15:07:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:16.273 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.273 15:07:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:16.273 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.273 15:07:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.273 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.273 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.273 request: 00:17:16.273 { 00:17:16.273 "name": "nvme", 00:17:16.273 "trtype": "tcp", 00:17:16.273 "traddr": "10.0.0.2", 00:17:16.273 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:16.273 "adrfam": "ipv4", 00:17:16.273 "trsvcid": "8009", 00:17:16.273 "wait_for_attach": true, 00:17:16.273 "method": "bdev_nvme_start_discovery", 00:17:16.273 "req_id": 1 00:17:16.273 } 00:17:16.273 Got JSON-RPC error response 00:17:16.273 response: 00:17:16.273 { 00:17:16.273 "code": -17, 00:17:16.273 "message": "File exists" 00:17:16.273 } 00:17:16.273 15:07:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:16.273 15:07:47 -- common/autotest_common.sh@653 -- # es=1 00:17:16.273 15:07:47 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.273 15:07:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.273 15:07:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.273 15:07:47 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:17:16.273 15:07:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:16.273 15:07:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:16.273 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.273 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.273 15:07:47 -- host/discovery.sh@67 -- # sort 00:17:16.273 15:07:47 -- host/discovery.sh@67 -- # xargs 00:17:16.531 15:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.531 15:07:47 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:17:16.531 15:07:47 -- host/discovery.sh@147 -- # get_bdev_list 00:17:16.531 15:07:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.531 15:07:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.531 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.531 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.531 15:07:47 -- host/discovery.sh@55 -- # sort 00:17:16.531 15:07:47 -- host/discovery.sh@55 -- # xargs 00:17:16.531 15:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.531 15:07:47 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.531 15:07:47 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.531 15:07:47 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.531 15:07:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.531 15:07:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:16.531 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.532 15:07:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:16.532 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.532 15:07:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:17:16.532 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.532 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.532 request: 00:17:16.532 { 00:17:16.532 "name": "nvme_second", 00:17:16.532 "trtype": "tcp", 00:17:16.532 "traddr": "10.0.0.2", 00:17:16.532 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:16.532 "adrfam": "ipv4", 00:17:16.532 "trsvcid": "8009", 00:17:16.532 "wait_for_attach": true, 00:17:16.532 "method": "bdev_nvme_start_discovery", 00:17:16.532 "req_id": 1 00:17:16.532 } 00:17:16.532 Got JSON-RPC error response 00:17:16.532 response: 00:17:16.532 { 00:17:16.532 "code": -17, 00:17:16.532 "message": "File exists" 00:17:16.532 } 00:17:16.532 15:07:47 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:16.532 15:07:47 -- common/autotest_common.sh@653 -- # es=1 00:17:16.532 15:07:47 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:16.532 15:07:47 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:16.532 15:07:47 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:16.532 
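For reference, the discovery flow exercised above reduces to a short RPC sequence. The sketch below is hand-written against SPDK's scripts/rpc.py client rather than lifted from the harness (which drives the same calls through its rpc_cmd helper); the addresses, NQNs, and sizes are the ones used in this run.

# Target side (default RPC socket): transport, discovery listener, null bdevs, subsystem
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
scripts/rpc.py bdev_null_create null0 1000 512
scripts/rpc.py bdev_null_create null1 1000 512
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

# Host side (second nvmf_tgt on /tmp/host.sock): attach a discovery controller and wait for it
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Re-issuing bdev_nvme_start_discovery with a controller name that is already in use fails with
# JSON-RPC error -17 "File exists", which is exactly what the NOT checks above assert.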
15:07:47 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:17:16.532 15:07:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:16.532 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.532 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.532 15:07:47 -- host/discovery.sh@67 -- # sort 00:17:16.532 15:07:47 -- host/discovery.sh@67 -- # xargs 00:17:16.532 15:07:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:16.532 15:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.532 15:07:47 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:17:16.532 15:07:47 -- host/discovery.sh@153 -- # get_bdev_list 00:17:16.532 15:07:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:16.532 15:07:47 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:17:16.532 15:07:47 -- host/discovery.sh@55 -- # sort 00:17:16.532 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.532 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.532 15:07:47 -- host/discovery.sh@55 -- # xargs 00:17:16.532 15:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.532 15:07:47 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:17:16.532 15:07:47 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.532 15:07:47 -- common/autotest_common.sh@650 -- # local es=0 00:17:16.532 15:07:47 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.532 15:07:47 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:16.532 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.532 15:07:47 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:16.532 15:07:47 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:16.532 15:07:47 -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:17:16.532 15:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.532 15:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.905 [2024-11-20 15:07:48.292217] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.905 [2024-11-20 15:07:48.292376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.905 [2024-11-20 15:07:48.292428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:17.905 [2024-11-20 15:07:48.292446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f20300 with addr=10.0.0.2, port=8010 00:17:17.905 [2024-11-20 15:07:48.292466] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:17.905 [2024-11-20 15:07:48.292477] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:17.905 [2024-11-20 15:07:48.292487] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:18.837 [2024-11-20 15:07:49.292217] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.837 [2024-11-20 15:07:49.292328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:17:18.837 [2024-11-20 15:07:49.292375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:18.837 [2024-11-20 15:07:49.292392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f20300 with addr=10.0.0.2, port=8010 00:17:18.837 [2024-11-20 15:07:49.292411] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:17:18.837 [2024-11-20 15:07:49.292421] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:17:18.837 [2024-11-20 15:07:49.292431] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:17:19.772 [2024-11-20 15:07:50.292059] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:17:19.772 request: 00:17:19.772 { 00:17:19.772 "name": "nvme_second", 00:17:19.772 "trtype": "tcp", 00:17:19.772 "traddr": "10.0.0.2", 00:17:19.772 "hostnqn": "nqn.2021-12.io.spdk:test", 00:17:19.772 "adrfam": "ipv4", 00:17:19.772 "trsvcid": "8010", 00:17:19.772 "attach_timeout_ms": 3000, 00:17:19.772 "method": "bdev_nvme_start_discovery", 00:17:19.772 "req_id": 1 00:17:19.772 } 00:17:19.772 Got JSON-RPC error response 00:17:19.772 response: 00:17:19.772 { 00:17:19.772 "code": -110, 00:17:19.772 "message": "Connection timed out" 00:17:19.772 } 00:17:19.772 15:07:50 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:19.772 15:07:50 -- common/autotest_common.sh@653 -- # es=1 00:17:19.772 15:07:50 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:19.772 15:07:50 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:19.772 15:07:50 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:19.772 15:07:50 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:17:19.772 15:07:50 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:17:19.772 15:07:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.772 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:17:19.772 15:07:50 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:17:19.772 15:07:50 -- host/discovery.sh@67 -- # xargs 00:17:19.772 15:07:50 -- host/discovery.sh@67 -- # sort 00:17:19.772 15:07:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.772 15:07:50 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:17:19.772 15:07:50 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:17:19.772 15:07:50 -- host/discovery.sh@162 -- # kill 82543 00:17:19.772 15:07:50 -- host/discovery.sh@163 -- # nvmftestfini 00:17:19.772 15:07:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:19.772 15:07:50 -- nvmf/common.sh@116 -- # sync 00:17:19.772 15:07:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:19.772 15:07:50 -- nvmf/common.sh@119 -- # set +e 00:17:19.772 15:07:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:19.772 15:07:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:19.772 rmmod nvme_tcp 00:17:19.772 rmmod nvme_fabrics 00:17:19.772 rmmod nvme_keyring 00:17:19.772 15:07:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:19.772 15:07:50 -- nvmf/common.sh@123 -- # set -e 00:17:19.772 15:07:50 -- nvmf/common.sh@124 -- # return 0 00:17:19.772 15:07:50 -- nvmf/common.sh@477 -- # '[' -n 82518 ']' 00:17:19.772 15:07:50 -- nvmf/common.sh@478 -- # killprocess 82518 00:17:19.772 15:07:50 -- common/autotest_common.sh@936 -- # '[' -z 82518 ']' 00:17:19.772 15:07:50 -- common/autotest_common.sh@940 -- # kill -0 82518 00:17:19.772 15:07:50 -- 
common/autotest_common.sh@941 -- # uname 00:17:19.772 15:07:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:19.772 15:07:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82518 00:17:19.772 15:07:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:19.772 killing process with pid 82518 00:17:19.772 15:07:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:19.772 15:07:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82518' 00:17:19.772 15:07:50 -- common/autotest_common.sh@955 -- # kill 82518 00:17:19.772 15:07:50 -- common/autotest_common.sh@960 -- # wait 82518 00:17:20.030 15:07:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:20.030 15:07:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:20.030 15:07:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:20.030 15:07:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.030 15:07:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:20.030 15:07:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.030 15:07:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.030 15:07:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.030 15:07:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:20.030 00:17:20.030 real 0m12.532s 00:17:20.030 user 0m24.246s 00:17:20.030 sys 0m2.151s 00:17:20.030 15:07:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:20.030 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.030 ************************************ 00:17:20.030 END TEST nvmf_discovery 00:17:20.030 ************************************ 00:17:20.030 15:07:50 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:20.030 15:07:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:20.030 15:07:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:20.030 15:07:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.030 ************************************ 00:17:20.030 START TEST nvmf_discovery_remove_ifc 00:17:20.030 ************************************ 00:17:20.030 15:07:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:17:20.030 * Looking for test storage... 
00:17:20.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:20.030 15:07:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:20.030 15:07:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:20.030 15:07:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:20.289 15:07:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:20.289 15:07:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:20.289 15:07:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:20.289 15:07:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:20.289 15:07:50 -- scripts/common.sh@335 -- # IFS=.-: 00:17:20.289 15:07:50 -- scripts/common.sh@335 -- # read -ra ver1 00:17:20.289 15:07:50 -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.289 15:07:50 -- scripts/common.sh@336 -- # read -ra ver2 00:17:20.289 15:07:50 -- scripts/common.sh@337 -- # local 'op=<' 00:17:20.289 15:07:50 -- scripts/common.sh@339 -- # ver1_l=2 00:17:20.289 15:07:50 -- scripts/common.sh@340 -- # ver2_l=1 00:17:20.289 15:07:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:20.289 15:07:50 -- scripts/common.sh@343 -- # case "$op" in 00:17:20.289 15:07:50 -- scripts/common.sh@344 -- # : 1 00:17:20.289 15:07:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:20.289 15:07:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:20.289 15:07:50 -- scripts/common.sh@364 -- # decimal 1 00:17:20.289 15:07:50 -- scripts/common.sh@352 -- # local d=1 00:17:20.289 15:07:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.289 15:07:50 -- scripts/common.sh@354 -- # echo 1 00:17:20.289 15:07:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:20.289 15:07:50 -- scripts/common.sh@365 -- # decimal 2 00:17:20.289 15:07:50 -- scripts/common.sh@352 -- # local d=2 00:17:20.289 15:07:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.289 15:07:50 -- scripts/common.sh@354 -- # echo 2 00:17:20.289 15:07:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:20.289 15:07:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:20.289 15:07:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:20.289 15:07:50 -- scripts/common.sh@367 -- # return 0 00:17:20.289 15:07:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.289 15:07:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:20.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.289 --rc genhtml_branch_coverage=1 00:17:20.289 --rc genhtml_function_coverage=1 00:17:20.289 --rc genhtml_legend=1 00:17:20.289 --rc geninfo_all_blocks=1 00:17:20.289 --rc geninfo_unexecuted_blocks=1 00:17:20.289 00:17:20.289 ' 00:17:20.289 15:07:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:20.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.289 --rc genhtml_branch_coverage=1 00:17:20.289 --rc genhtml_function_coverage=1 00:17:20.289 --rc genhtml_legend=1 00:17:20.289 --rc geninfo_all_blocks=1 00:17:20.289 --rc geninfo_unexecuted_blocks=1 00:17:20.289 00:17:20.289 ' 00:17:20.289 15:07:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:20.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.289 --rc genhtml_branch_coverage=1 00:17:20.289 --rc genhtml_function_coverage=1 00:17:20.289 --rc genhtml_legend=1 00:17:20.289 --rc geninfo_all_blocks=1 00:17:20.289 --rc geninfo_unexecuted_blocks=1 00:17:20.289 00:17:20.289 ' 00:17:20.289 
15:07:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:20.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.289 --rc genhtml_branch_coverage=1 00:17:20.289 --rc genhtml_function_coverage=1 00:17:20.289 --rc genhtml_legend=1 00:17:20.289 --rc geninfo_all_blocks=1 00:17:20.289 --rc geninfo_unexecuted_blocks=1 00:17:20.289 00:17:20.289 ' 00:17:20.289 15:07:50 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:20.289 15:07:50 -- nvmf/common.sh@7 -- # uname -s 00:17:20.289 15:07:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.289 15:07:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.289 15:07:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.289 15:07:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.289 15:07:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:20.289 15:07:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:20.289 15:07:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.290 15:07:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:20.290 15:07:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.290 15:07:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:20.290 15:07:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:20.290 15:07:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:20.290 15:07:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.290 15:07:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:20.290 15:07:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:20.290 15:07:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:20.290 15:07:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.290 15:07:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.290 15:07:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.290 15:07:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.290 15:07:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.290 15:07:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.290 15:07:50 -- paths/export.sh@5 -- # export PATH 00:17:20.290 15:07:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.290 15:07:50 -- nvmf/common.sh@46 -- # : 0 00:17:20.290 15:07:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:20.290 15:07:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:20.290 15:07:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:20.290 15:07:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.290 15:07:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.290 15:07:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:20.290 15:07:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:20.290 15:07:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:17:20.290 15:07:50 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:17:20.290 15:07:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:20.290 15:07:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:20.290 15:07:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:20.290 15:07:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:20.290 15:07:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:20.290 15:07:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.290 15:07:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.290 15:07:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.290 15:07:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:20.290 15:07:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:20.290 15:07:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:20.290 15:07:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:20.290 15:07:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:20.290 15:07:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:20.290 15:07:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.290 15:07:50 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.290 15:07:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:20.290 15:07:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:20.290 15:07:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:20.290 15:07:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:20.290 15:07:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:20.290 15:07:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.290 15:07:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:20.290 15:07:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:20.290 15:07:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:20.290 15:07:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:20.290 15:07:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:20.290 15:07:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:20.290 Cannot find device "nvmf_tgt_br" 00:17:20.290 15:07:50 -- nvmf/common.sh@154 -- # true 00:17:20.290 15:07:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:20.290 Cannot find device "nvmf_tgt_br2" 00:17:20.290 15:07:50 -- nvmf/common.sh@155 -- # true 00:17:20.290 15:07:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:20.290 15:07:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:20.290 Cannot find device "nvmf_tgt_br" 00:17:20.290 15:07:51 -- nvmf/common.sh@157 -- # true 00:17:20.290 15:07:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:20.290 Cannot find device "nvmf_tgt_br2" 00:17:20.290 15:07:51 -- nvmf/common.sh@158 -- # true 00:17:20.290 15:07:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:20.290 15:07:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:20.549 15:07:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:20.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.549 15:07:51 -- nvmf/common.sh@161 -- # true 00:17:20.549 15:07:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:20.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:20.549 15:07:51 -- nvmf/common.sh@162 -- # true 00:17:20.549 15:07:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:20.549 15:07:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:20.549 15:07:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:20.549 15:07:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:20.549 15:07:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:20.549 15:07:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:20.549 15:07:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:20.549 15:07:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:20.549 15:07:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:20.549 15:07:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:20.549 15:07:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:20.549 15:07:51 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:20.549 15:07:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:20.549 15:07:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:20.549 15:07:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:20.549 15:07:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:20.549 15:07:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:20.549 15:07:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:20.549 15:07:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:20.549 15:07:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:20.549 15:07:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:20.549 15:07:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:20.549 15:07:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:20.549 15:07:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:20.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:20.549 00:17:20.549 --- 10.0.0.2 ping statistics --- 00:17:20.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.549 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:20.549 15:07:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:20.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:20.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:20.549 00:17:20.549 --- 10.0.0.3 ping statistics --- 00:17:20.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.549 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:20.549 15:07:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:20.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:20.549 00:17:20.549 --- 10.0.0.1 ping statistics --- 00:17:20.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.549 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:20.549 15:07:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.549 15:07:51 -- nvmf/common.sh@421 -- # return 0 00:17:20.549 15:07:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.550 15:07:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.550 15:07:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.550 15:07:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.550 15:07:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.550 15:07:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.550 15:07:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.550 15:07:51 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:17:20.550 15:07:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.550 15:07:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.550 15:07:51 -- common/autotest_common.sh@10 -- # set +x 00:17:20.550 15:07:51 -- nvmf/common.sh@469 -- # nvmfpid=83043 00:17:20.550 15:07:51 -- nvmf/common.sh@470 -- # waitforlisten 83043 00:17:20.550 15:07:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:20.550 15:07:51 -- common/autotest_common.sh@829 -- # '[' -z 83043 ']' 00:17:20.550 15:07:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.550 15:07:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.550 15:07:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.550 15:07:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.550 15:07:51 -- common/autotest_common.sh@10 -- # set +x 00:17:20.808 [2024-11-20 15:07:51.360848] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:20.808 [2024-11-20 15:07:51.360965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.808 [2024-11-20 15:07:51.495487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.808 [2024-11-20 15:07:51.531621] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.808 [2024-11-20 15:07:51.531827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.808 [2024-11-20 15:07:51.531846] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.808 [2024-11-20 15:07:51.531855] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
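The interface plumbing performed by nvmf_veth_init above can be reproduced by hand roughly as follows; the "Cannot find device" messages in the log come from tearing down interfaces that do not exist yet on this runner, so they are harmless. Names and addresses match this run.

# Target network namespace plus veth pairs, target ends moved into the namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3 inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring the links up and bridge the host-side ends together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic in, forward across the bridge, then sanity-check reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1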
00:17:20.808 [2024-11-20 15:07:51.531882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.743 15:07:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:21.743 15:07:52 -- common/autotest_common.sh@862 -- # return 0 00:17:21.743 15:07:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.744 15:07:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:21.744 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.744 15:07:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.744 15:07:52 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:17:21.744 15:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.744 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.744 [2024-11-20 15:07:52.390961] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.744 [2024-11-20 15:07:52.399089] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:17:21.744 null0 00:17:21.744 [2024-11-20 15:07:52.431085] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.744 15:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.744 15:07:52 -- host/discovery_remove_ifc.sh@59 -- # hostpid=83075 00:17:21.744 15:07:52 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:17:21.744 15:07:52 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83075 /tmp/host.sock 00:17:21.744 15:07:52 -- common/autotest_common.sh@829 -- # '[' -z 83075 ']' 00:17:21.744 15:07:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:17:21.744 15:07:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:21.744 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:17:21.744 15:07:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:17:21.744 15:07:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:21.744 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:21.744 [2024-11-20 15:07:52.496622] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:21.744 [2024-11-20 15:07:52.496718] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83075 ] 00:17:22.003 [2024-11-20 15:07:52.633662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.003 [2024-11-20 15:07:52.668392] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.003 [2024-11-20 15:07:52.668557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.003 15:07:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.003 15:07:52 -- common/autotest_common.sh@862 -- # return 0 00:17:22.003 15:07:52 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:22.003 15:07:52 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:17:22.003 15:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.003 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.003 15:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.003 15:07:52 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:17:22.003 15:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.003 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.003 15:07:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.003 15:07:52 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:17:22.003 15:07:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.003 15:07:52 -- common/autotest_common.sh@10 -- # set +x 00:17:23.377 [2024-11-20 15:07:53.814160] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:23.377 [2024-11-20 15:07:53.814212] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:23.377 [2024-11-20 15:07:53.814234] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:23.377 [2024-11-20 15:07:53.820227] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:17:23.378 [2024-11-20 15:07:53.876499] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:23.378 [2024-11-20 15:07:53.876579] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:23.378 [2024-11-20 15:07:53.876607] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:23.378 [2024-11-20 15:07:53.876626] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:17:23.378 [2024-11-20 15:07:53.876667] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:23.378 15:07:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:23.378 15:07:53 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.378 15:07:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.378 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:23.378 [2024-11-20 15:07:53.882699] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2302af0 was disconnected and freed. delete nvme_qpair. 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:23.378 15:07:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:23.378 15:07:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:23.378 15:07:53 -- common/autotest_common.sh@10 -- # set +x 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:23.378 15:07:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:23.378 15:07:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:24.310 15:07:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:24.310 15:07:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:24.310 15:07:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:24.310 15:07:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.310 15:07:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:24.310 15:07:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:24.310 15:07:54 -- common/autotest_common.sh@10 -- # set +x 00:17:24.310 15:07:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.310 15:07:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:24.310 15:07:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:25.246 15:07:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:25.504 15:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:25.504 15:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.504 15:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:25.504 15:07:56 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
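The get_bdev_list/wait_for_bdev pair that keeps repeating through this stretch of the trace boils down to roughly the following, reconstructed from the xtrace lines themselves (rpc_cmd is the suite's thin wrapper around scripts/rpc.py pointed at the host socket):

    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        # loop (discovery_remove_ifc.sh@33-34) until the bdev list equals the argument
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

With the target address removed and nvmf_tgt_if down, get_bdev_list keeps returning nvme0n1 until the controller-loss timeout fires, which is why the sleep-1 iterations repeat several times below.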
00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:26.435 15:07:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.435 15:07:57 -- common/autotest_common.sh@10 -- # set +x 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:26.435 15:07:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:26.435 15:07:57 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:27.806 15:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:27.806 15:07:58 -- common/autotest_common.sh@10 -- # set +x 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:27.806 15:07:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:27.806 15:07:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:28.762 15:07:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:28.762 15:07:59 -- common/autotest_common.sh@10 -- # set +x 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:28.762 15:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:28.762 15:07:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:28.762 [2024-11-20 15:07:59.304194] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:17:28.762 [2024-11-20 15:07:59.304266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.762 [2024-11-20 15:07:59.304283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.762 [2024-11-20 15:07:59.304297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.762 [2024-11-20 15:07:59.304307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.762 [2024-11-20 15:07:59.304317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.763 [2024-11-20 15:07:59.304325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.763 [2024-11-20 15:07:59.304336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.763 [2024-11-20 15:07:59.304345] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.763 [2024-11-20 15:07:59.304355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.763 [2024-11-20 15:07:59.304364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.763 [2024-11-20 15:07:59.304373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7890 is same with the state(5) to be set 00:17:28.763 [2024-11-20 15:07:59.314195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7890 (9): Bad file descriptor 00:17:28.763 [2024-11-20 15:07:59.324231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:29.697 15:08:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:29.697 15:08:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:29.697 15:08:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.697 15:08:00 -- common/autotest_common.sh@10 -- # set +x 00:17:29.697 15:08:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:29.697 15:08:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:29.697 15:08:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:29.697 [2024-11-20 15:08:00.355753] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:17:30.632 [2024-11-20 15:08:01.379760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:32.004 [2024-11-20 15:08:02.403734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:17:32.004 [2024-11-20 15:08:02.404140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22c7890 with addr=10.0.0.2, port=4420 00:17:32.004 [2024-11-20 15:08:02.404190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22c7890 is same with the state(5) to be set 00:17:32.004 [2024-11-20 15:08:02.404248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:17:32.004 [2024-11-20 15:08:02.404272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:17:32.004 [2024-11-20 15:08:02.404290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:17:32.004 [2024-11-20 15:08:02.404310] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:17:32.004 [2024-11-20 15:08:02.404955] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c7890 (9): Bad file descriptor 00:17:32.004 [2024-11-20 15:08:02.405014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
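The quick give-up seen here (a connect retry roughly every second, then the controller dropped after about two seconds) follows from the discovery options the test passed when it attached, shown earlier at discovery_remove_ifc.sh@69 and condensed below; the timeouts are deliberately short so the interface-removal case completes quickly:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach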
00:17:32.004 [2024-11-20 15:08:02.405065] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:17:32.004 [2024-11-20 15:08:02.405126] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.004 [2024-11-20 15:08:02.405156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.004 [2024-11-20 15:08:02.405182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.004 [2024-11-20 15:08:02.405202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.004 [2024-11-20 15:08:02.405222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.004 [2024-11-20 15:08:02.405242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.004 [2024-11-20 15:08:02.405264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.004 [2024-11-20 15:08:02.405283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.004 [2024-11-20 15:08:02.405306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.004 [2024-11-20 15:08:02.405325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.004 [2024-11-20 15:08:02.405345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:17:32.004 [2024-11-20 15:08:02.405400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c6ef0 (9): Bad file descriptor 00:17:32.004 [2024-11-20 15:08:02.406409] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:17:32.004 [2024-11-20 15:08:02.406447] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:17:32.004 15:08:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.004 15:08:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:17:32.004 15:08:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.939 15:08:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.939 15:08:03 -- common/autotest_common.sh@10 -- # set +x 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.939 15:08:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:32.939 15:08:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:32.939 15:08:03 -- common/autotest_common.sh@10 -- # set +x 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:32.939 15:08:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:17:32.939 15:08:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:17:33.874 [2024-11-20 15:08:04.409893] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:17:33.874 [2024-11-20 15:08:04.410132] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:17:33.874 [2024-11-20 15:08:04.410197] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:17:33.874 [2024-11-20 15:08:04.415932] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:17:33.874 [2024-11-20 15:08:04.471389] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:17:33.874 [2024-11-20 15:08:04.471655] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:17:33.874 [2024-11-20 15:08:04.471738] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:17:33.874 [2024-11-20 15:08:04.471850] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:17:33.874 [2024-11-20 15:08:04.471914] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:17:33.874 [2024-11-20 15:08:04.478592] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x22b6e30 was disconnected and freed. delete nvme_qpair. 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:17:33.874 15:08:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.874 15:08:04 -- common/autotest_common.sh@10 -- # set +x 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:17:33.874 15:08:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:17:33.874 15:08:04 -- host/discovery_remove_ifc.sh@90 -- # killprocess 83075 00:17:33.874 15:08:04 -- common/autotest_common.sh@936 -- # '[' -z 83075 ']' 00:17:33.874 15:08:04 -- common/autotest_common.sh@940 -- # kill -0 83075 00:17:33.874 15:08:04 -- common/autotest_common.sh@941 -- # uname 00:17:33.874 15:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:33.874 15:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83075 00:17:34.133 killing process with pid 83075 00:17:34.133 15:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:34.133 15:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:34.133 15:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83075' 00:17:34.133 15:08:04 -- common/autotest_common.sh@955 -- # kill 83075 00:17:34.133 15:08:04 -- common/autotest_common.sh@960 -- # wait 83075 00:17:34.133 15:08:04 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:17:34.133 15:08:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:34.133 15:08:04 -- nvmf/common.sh@116 -- # sync 00:17:34.133 15:08:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:34.133 15:08:04 -- nvmf/common.sh@119 -- # set +e 00:17:34.133 15:08:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:34.133 15:08:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:34.133 rmmod nvme_tcp 00:17:34.133 rmmod nvme_fabrics 00:17:34.133 rmmod nvme_keyring 00:17:34.393 15:08:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:34.393 15:08:04 -- nvmf/common.sh@123 -- # set -e 00:17:34.393 15:08:04 -- nvmf/common.sh@124 -- # return 0 00:17:34.393 15:08:04 -- nvmf/common.sh@477 -- # '[' -n 83043 ']' 00:17:34.393 15:08:04 -- nvmf/common.sh@478 -- # killprocess 83043 00:17:34.393 15:08:04 -- common/autotest_common.sh@936 -- # '[' -z 83043 ']' 00:17:34.393 15:08:04 -- common/autotest_common.sh@940 -- # kill -0 83043 00:17:34.393 15:08:04 -- common/autotest_common.sh@941 -- # uname 00:17:34.393 15:08:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.393 15:08:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83043 00:17:34.393 killing process with pid 83043 00:17:34.393 15:08:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:34.393 15:08:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:17:34.393 15:08:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83043' 00:17:34.393 15:08:04 -- common/autotest_common.sh@955 -- # kill 83043 00:17:34.393 15:08:04 -- common/autotest_common.sh@960 -- # wait 83043 00:17:34.393 15:08:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:34.393 15:08:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:34.393 15:08:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:34.393 15:08:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:34.393 15:08:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:34.393 15:08:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.393 15:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.393 15:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.393 15:08:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:34.393 00:17:34.393 real 0m14.414s 00:17:34.393 user 0m22.689s 00:17:34.393 sys 0m2.383s 00:17:34.393 ************************************ 00:17:34.393 END TEST nvmf_discovery_remove_ifc 00:17:34.393 ************************************ 00:17:34.393 15:08:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:34.393 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:17:34.652 15:08:05 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:17:34.652 15:08:05 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:34.652 15:08:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:34.652 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:17:34.652 ************************************ 00:17:34.652 START TEST nvmf_digest 00:17:34.652 ************************************ 00:17:34.652 15:08:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:34.652 * Looking for test storage... 00:17:34.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:34.652 15:08:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:34.652 15:08:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:34.652 15:08:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:34.652 15:08:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:34.652 15:08:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:34.652 15:08:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:34.652 15:08:05 -- scripts/common.sh@335 -- # IFS=.-: 00:17:34.652 15:08:05 -- scripts/common.sh@335 -- # read -ra ver1 00:17:34.652 15:08:05 -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.652 15:08:05 -- scripts/common.sh@336 -- # read -ra ver2 00:17:34.652 15:08:05 -- scripts/common.sh@337 -- # local 'op=<' 00:17:34.652 15:08:05 -- scripts/common.sh@339 -- # ver1_l=2 00:17:34.652 15:08:05 -- scripts/common.sh@340 -- # ver2_l=1 00:17:34.652 15:08:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:34.652 15:08:05 -- scripts/common.sh@343 -- # case "$op" in 00:17:34.652 15:08:05 -- scripts/common.sh@344 -- # : 1 00:17:34.652 15:08:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:34.652 15:08:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.652 15:08:05 -- scripts/common.sh@364 -- # decimal 1 00:17:34.652 15:08:05 -- scripts/common.sh@352 -- # local d=1 00:17:34.652 15:08:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.652 15:08:05 -- scripts/common.sh@354 -- # echo 1 00:17:34.652 15:08:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:34.652 15:08:05 -- scripts/common.sh@365 -- # decimal 2 00:17:34.652 15:08:05 -- scripts/common.sh@352 -- # local d=2 00:17:34.652 15:08:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.652 15:08:05 -- scripts/common.sh@354 -- # echo 2 00:17:34.652 15:08:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:34.652 15:08:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:34.652 15:08:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:34.652 15:08:05 -- scripts/common.sh@367 -- # return 0 00:17:34.652 15:08:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:34.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.652 --rc genhtml_branch_coverage=1 00:17:34.652 --rc genhtml_function_coverage=1 00:17:34.652 --rc genhtml_legend=1 00:17:34.652 --rc geninfo_all_blocks=1 00:17:34.652 --rc geninfo_unexecuted_blocks=1 00:17:34.652 00:17:34.652 ' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:34.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.652 --rc genhtml_branch_coverage=1 00:17:34.652 --rc genhtml_function_coverage=1 00:17:34.652 --rc genhtml_legend=1 00:17:34.652 --rc geninfo_all_blocks=1 00:17:34.652 --rc geninfo_unexecuted_blocks=1 00:17:34.652 00:17:34.652 ' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:34.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.652 --rc genhtml_branch_coverage=1 00:17:34.652 --rc genhtml_function_coverage=1 00:17:34.652 --rc genhtml_legend=1 00:17:34.652 --rc geninfo_all_blocks=1 00:17:34.652 --rc geninfo_unexecuted_blocks=1 00:17:34.652 00:17:34.652 ' 00:17:34.652 15:08:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:34.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.652 --rc genhtml_branch_coverage=1 00:17:34.652 --rc genhtml_function_coverage=1 00:17:34.652 --rc genhtml_legend=1 00:17:34.652 --rc geninfo_all_blocks=1 00:17:34.652 --rc geninfo_unexecuted_blocks=1 00:17:34.652 00:17:34.652 ' 00:17:34.652 15:08:05 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:34.652 15:08:05 -- nvmf/common.sh@7 -- # uname -s 00:17:34.652 15:08:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.652 15:08:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.652 15:08:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.652 15:08:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.652 15:08:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.652 15:08:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.652 15:08:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.652 15:08:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.652 15:08:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.652 15:08:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.652 15:08:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:34.652 
15:08:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:17:34.652 15:08:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.652 15:08:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.652 15:08:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:34.652 15:08:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:34.652 15:08:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.652 15:08:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.652 15:08:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.652 15:08:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.652 15:08:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.652 15:08:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.652 15:08:05 -- paths/export.sh@5 -- # export PATH 00:17:34.652 15:08:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.652 15:08:05 -- nvmf/common.sh@46 -- # : 0 00:17:34.653 15:08:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:34.653 15:08:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:34.653 15:08:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:34.653 15:08:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.653 15:08:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.653 15:08:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
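The nvmf_veth_init sequence that follows rebuilds the test network from scratch; condensed into plain commands, the topology it creates looks roughly like this (interface, bridge and namespace names are the ones defined just above, and the individual "link set ... up" steps are omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair for the initiator, two for the target namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # everything is tied together by a bridge in the root namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # and an iptables rule lets NVMe/TCP traffic to port 4420 in
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT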
00:17:34.653 15:08:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:34.653 15:08:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:34.653 15:08:05 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:34.653 15:08:05 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:34.653 15:08:05 -- host/digest.sh@16 -- # runtime=2 00:17:34.653 15:08:05 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:17:34.653 15:08:05 -- host/digest.sh@132 -- # nvmftestinit 00:17:34.653 15:08:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:34.653 15:08:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.653 15:08:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:34.653 15:08:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:34.653 15:08:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:34.653 15:08:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.653 15:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.653 15:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.911 15:08:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:34.911 15:08:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:34.911 15:08:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:34.911 15:08:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:34.911 15:08:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:34.911 15:08:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:34.911 15:08:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.911 15:08:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.911 15:08:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:34.911 15:08:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:34.911 15:08:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.911 15:08:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.911 15:08:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.911 15:08:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.911 15:08:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.911 15:08:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.911 15:08:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.911 15:08:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.911 15:08:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:34.911 15:08:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:34.911 Cannot find device "nvmf_tgt_br" 00:17:34.911 15:08:05 -- nvmf/common.sh@154 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.911 Cannot find device "nvmf_tgt_br2" 00:17:34.911 15:08:05 -- nvmf/common.sh@155 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:34.911 15:08:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:34.911 Cannot find device "nvmf_tgt_br" 00:17:34.911 15:08:05 -- nvmf/common.sh@157 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:34.911 Cannot find device "nvmf_tgt_br2" 00:17:34.911 15:08:05 -- nvmf/common.sh@158 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:34.911 15:08:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:34.911 
15:08:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.911 15:08:05 -- nvmf/common.sh@161 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.911 15:08:05 -- nvmf/common.sh@162 -- # true 00:17:34.911 15:08:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.911 15:08:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.911 15:08:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.911 15:08:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.911 15:08:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:34.911 15:08:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:34.911 15:08:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:34.911 15:08:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:34.911 15:08:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:34.911 15:08:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:34.911 15:08:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:34.911 15:08:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:34.911 15:08:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:34.911 15:08:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:34.911 15:08:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:34.911 15:08:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.170 15:08:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:35.170 15:08:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:35.170 15:08:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.170 15:08:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.170 15:08:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.170 15:08:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.170 15:08:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.170 15:08:05 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:35.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:17:35.170 00:17:35.170 --- 10.0.0.2 ping statistics --- 00:17:35.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.170 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:35.170 15:08:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:35.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:35.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:17:35.170 00:17:35.170 --- 10.0.0.3 ping statistics --- 00:17:35.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.170 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:35.170 15:08:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:17:35.170 00:17:35.170 --- 10.0.0.1 ping statistics --- 00:17:35.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.170 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:17:35.170 15:08:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.170 15:08:05 -- nvmf/common.sh@421 -- # return 0 00:17:35.170 15:08:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:35.170 15:08:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.170 15:08:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:35.170 15:08:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:35.170 15:08:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.170 15:08:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:35.170 15:08:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:35.170 15:08:05 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:35.170 15:08:05 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:17:35.170 15:08:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:35.170 15:08:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.170 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 ************************************ 00:17:35.170 START TEST nvmf_digest_clean 00:17:35.170 ************************************ 00:17:35.170 15:08:05 -- common/autotest_common.sh@1114 -- # run_digest 00:17:35.170 15:08:05 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:17:35.170 15:08:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:35.170 15:08:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.170 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 15:08:05 -- nvmf/common.sh@469 -- # nvmfpid=83484 00:17:35.170 15:08:05 -- nvmf/common.sh@470 -- # waitforlisten 83484 00:17:35.170 15:08:05 -- common/autotest_common.sh@829 -- # '[' -z 83484 ']' 00:17:35.170 15:08:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.170 15:08:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.170 15:08:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:35.170 15:08:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.170 15:08:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.170 15:08:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.170 [2024-11-20 15:08:05.891215] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:35.170 [2024-11-20 15:08:05.891304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.430 [2024-11-20 15:08:06.023240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.430 [2024-11-20 15:08:06.058112] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:35.430 [2024-11-20 15:08:06.058248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.430 [2024-11-20 15:08:06.058261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.430 [2024-11-20 15:08:06.058269] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.430 [2024-11-20 15:08:06.058303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.430 15:08:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.430 15:08:06 -- common/autotest_common.sh@862 -- # return 0 00:17:35.430 15:08:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:35.430 15:08:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.430 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 15:08:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.430 15:08:06 -- host/digest.sh@120 -- # common_target_config 00:17:35.430 15:08:06 -- host/digest.sh@43 -- # rpc_cmd 00:17:35.430 15:08:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.430 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.430 null0 00:17:35.430 [2024-11-20 15:08:06.221502] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.689 [2024-11-20 15:08:06.245648] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.689 15:08:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.689 15:08:06 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:17:35.689 15:08:06 -- host/digest.sh@77 -- # local rw bs qd 00:17:35.689 15:08:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:35.689 15:08:06 -- host/digest.sh@80 -- # rw=randread 00:17:35.689 15:08:06 -- host/digest.sh@80 -- # bs=4096 00:17:35.689 15:08:06 -- host/digest.sh@80 -- # qd=128 00:17:35.690 15:08:06 -- host/digest.sh@82 -- # bperfpid=83508 00:17:35.690 15:08:06 -- host/digest.sh@83 -- # waitforlisten 83508 /var/tmp/bperf.sock 00:17:35.690 15:08:06 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:35.690 15:08:06 -- common/autotest_common.sh@829 -- # '[' -z 83508 ']' 00:17:35.690 15:08:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:35.690 15:08:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.690 15:08:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:35.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
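The common_target_config step traced a little earlier in this block is recorded only as a bare rpc_cmd; judging from the null0 bdev and the 10.0.0.2:4420 listener that show up, a plausible expansion is the batch of RPCs below. The exact bdev size and transport flags are not visible in this log, so treat them as placeholders rather than the script's real arguments:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o          # NVMF_TRANSPORT_OPTS as set above
    $rpc bdev_null_create null0 1000 512          # size/block size assumed, not shown here
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420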
00:17:35.690 15:08:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.690 15:08:06 -- common/autotest_common.sh@10 -- # set +x 00:17:35.690 [2024-11-20 15:08:06.302121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:35.690 [2024-11-20 15:08:06.302219] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83508 ] 00:17:35.690 [2024-11-20 15:08:06.443156] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.690 [2024-11-20 15:08:06.482780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.624 15:08:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.624 15:08:07 -- common/autotest_common.sh@862 -- # return 0 00:17:36.624 15:08:07 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:36.624 15:08:07 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:36.625 15:08:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:36.883 15:08:07 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:36.883 15:08:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:37.450 nvme0n1 00:17:37.450 15:08:07 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:37.450 15:08:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:37.450 Running I/O for 2 seconds... 
00:17:39.354 00:17:39.354 Latency(us) 00:17:39.354 [2024-11-20T15:08:10.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.354 [2024-11-20T15:08:10.158Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:39.354 nvme0n1 : 2.01 14459.78 56.48 0.00 0.00 8845.45 8221.79 19184.17 00:17:39.354 [2024-11-20T15:08:10.158Z] =================================================================================================================== 00:17:39.354 [2024-11-20T15:08:10.158Z] Total : 14459.78 56.48 0.00 0.00 8845.45 8221.79 19184.17 00:17:39.354 0 00:17:39.354 15:08:10 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:39.354 15:08:10 -- host/digest.sh@92 -- # get_accel_stats 00:17:39.354 15:08:10 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:39.354 | select(.opcode=="crc32c") 00:17:39.354 | "\(.module_name) \(.executed)"' 00:17:39.354 15:08:10 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:39.354 15:08:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:39.922 15:08:10 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:39.922 15:08:10 -- host/digest.sh@93 -- # exp_module=software 00:17:39.922 15:08:10 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:39.922 15:08:10 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:39.922 15:08:10 -- host/digest.sh@97 -- # killprocess 83508 00:17:39.922 15:08:10 -- common/autotest_common.sh@936 -- # '[' -z 83508 ']' 00:17:39.922 15:08:10 -- common/autotest_common.sh@940 -- # kill -0 83508 00:17:39.922 15:08:10 -- common/autotest_common.sh@941 -- # uname 00:17:39.922 15:08:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:39.922 15:08:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83508 00:17:39.922 15:08:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:39.922 15:08:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:39.922 15:08:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83508' 00:17:39.922 killing process with pid 83508 00:17:39.922 15:08:10 -- common/autotest_common.sh@955 -- # kill 83508 00:17:39.922 Received shutdown signal, test time was about 2.000000 seconds 00:17:39.922 00:17:39.922 Latency(us) 00:17:39.922 [2024-11-20T15:08:10.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.922 [2024-11-20T15:08:10.726Z] =================================================================================================================== 00:17:39.922 [2024-11-20T15:08:10.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.922 15:08:10 -- common/autotest_common.sh@960 -- # wait 83508 00:17:39.922 15:08:10 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:17:39.922 15:08:10 -- host/digest.sh@77 -- # local rw bs qd 00:17:39.922 15:08:10 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:39.922 15:08:10 -- host/digest.sh@80 -- # rw=randread 00:17:39.922 15:08:10 -- host/digest.sh@80 -- # bs=131072 00:17:39.922 15:08:10 -- host/digest.sh@80 -- # qd=16 00:17:39.922 15:08:10 -- host/digest.sh@82 -- # bperfpid=83573 00:17:39.922 15:08:10 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:39.922 15:08:10 -- host/digest.sh@83 -- # waitforlisten 83573 /var/tmp/bperf.sock 00:17:39.922 15:08:10 -- 
common/autotest_common.sh@829 -- # '[' -z 83573 ']' 00:17:39.922 15:08:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:39.922 15:08:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.922 15:08:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:39.922 15:08:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.922 15:08:10 -- common/autotest_common.sh@10 -- # set +x 00:17:39.922 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:39.922 Zero copy mechanism will not be used. 00:17:39.922 [2024-11-20 15:08:10.642521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:39.922 [2024-11-20 15:08:10.642608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83573 ] 00:17:40.181 [2024-11-20 15:08:10.784237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.181 [2024-11-20 15:08:10.818465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.181 15:08:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.181 15:08:10 -- common/autotest_common.sh@862 -- # return 0 00:17:40.181 15:08:10 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:40.181 15:08:10 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:40.181 15:08:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:40.439 15:08:11 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:40.439 15:08:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:41.006 nvme0n1 00:17:41.006 15:08:11 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:41.006 15:08:11 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:41.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:41.006 Zero copy mechanism will not be used. 00:17:41.006 Running I/O for 2 seconds... 
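While this pass runs, it is worth spelling out the check that digest.sh applies after every bperf run (it already appeared once above around digest.sh@92-@95); a rough equivalent, with the same jq filter as in the trace:

    # read the crc32c accel counters from the bperf app and assert that digests
    # were computed, and by the accel module the test expects (software here)
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    exp_module=software                 # no accel offload configured in this run
    (( acc_executed > 0 ))
    [[ "$acc_module" == "$exp_module" ]]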
00:17:42.909 00:17:42.909 Latency(us) 00:17:42.909 [2024-11-20T15:08:13.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.909 [2024-11-20T15:08:13.713Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:42.909 nvme0n1 : 2.00 7327.51 915.94 0.00 0.00 2180.47 1951.19 8281.37 00:17:42.909 [2024-11-20T15:08:13.713Z] =================================================================================================================== 00:17:42.909 [2024-11-20T15:08:13.713Z] Total : 7327.51 915.94 0.00 0.00 2180.47 1951.19 8281.37 00:17:42.909 0 00:17:42.909 15:08:13 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:42.909 15:08:13 -- host/digest.sh@92 -- # get_accel_stats 00:17:42.909 15:08:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:42.909 15:08:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:42.909 15:08:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:42.909 | select(.opcode=="crc32c") 00:17:42.909 | "\(.module_name) \(.executed)"' 00:17:43.167 15:08:13 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:43.167 15:08:13 -- host/digest.sh@93 -- # exp_module=software 00:17:43.167 15:08:13 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:43.167 15:08:13 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:43.167 15:08:13 -- host/digest.sh@97 -- # killprocess 83573 00:17:43.167 15:08:13 -- common/autotest_common.sh@936 -- # '[' -z 83573 ']' 00:17:43.167 15:08:13 -- common/autotest_common.sh@940 -- # kill -0 83573 00:17:43.167 15:08:13 -- common/autotest_common.sh@941 -- # uname 00:17:43.167 15:08:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.167 15:08:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83573 00:17:43.426 15:08:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.426 15:08:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.426 15:08:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83573' 00:17:43.426 killing process with pid 83573 00:17:43.426 Received shutdown signal, test time was about 2.000000 seconds 00:17:43.426 00:17:43.426 Latency(us) 00:17:43.426 [2024-11-20T15:08:14.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.426 [2024-11-20T15:08:14.230Z] =================================================================================================================== 00:17:43.426 [2024-11-20T15:08:14.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.426 15:08:13 -- common/autotest_common.sh@955 -- # kill 83573 00:17:43.426 15:08:13 -- common/autotest_common.sh@960 -- # wait 83573 00:17:43.426 15:08:14 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:17:43.426 15:08:14 -- host/digest.sh@77 -- # local rw bs qd 00:17:43.426 15:08:14 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:43.426 15:08:14 -- host/digest.sh@80 -- # rw=randwrite 00:17:43.426 15:08:14 -- host/digest.sh@80 -- # bs=4096 00:17:43.426 15:08:14 -- host/digest.sh@80 -- # qd=128 00:17:43.426 15:08:14 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:43.426 15:08:14 -- host/digest.sh@82 -- # bperfpid=83621 00:17:43.426 15:08:14 -- host/digest.sh@83 -- # waitforlisten 83621 /var/tmp/bperf.sock 00:17:43.426 15:08:14 -- 
common/autotest_common.sh@829 -- # '[' -z 83621 ']' 00:17:43.426 15:08:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:43.426 15:08:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:43.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:43.426 15:08:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:43.426 15:08:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:43.426 15:08:14 -- common/autotest_common.sh@10 -- # set +x 00:17:43.426 [2024-11-20 15:08:14.159481] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:43.426 [2024-11-20 15:08:14.159565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83621 ] 00:17:43.685 [2024-11-20 15:08:14.291008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.685 [2024-11-20 15:08:14.325979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.685 15:08:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:43.685 15:08:14 -- common/autotest_common.sh@862 -- # return 0 00:17:43.685 15:08:14 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:43.685 15:08:14 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:43.685 15:08:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:44.252 15:08:14 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.252 15:08:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:44.510 nvme0n1 00:17:44.510 15:08:15 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:44.510 15:08:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:44.510 Running I/O for 2 seconds... 
00:17:47.041 00:17:47.041 Latency(us) 00:17:47.041 [2024-11-20T15:08:17.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.041 [2024-11-20T15:08:17.845Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.041 nvme0n1 : 2.00 15654.86 61.15 0.00 0.00 8168.58 7506.85 17277.67 00:17:47.041 [2024-11-20T15:08:17.845Z] =================================================================================================================== 00:17:47.042 [2024-11-20T15:08:17.846Z] Total : 15654.86 61.15 0.00 0.00 8168.58 7506.85 17277.67 00:17:47.042 0 00:17:47.042 15:08:17 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:47.042 15:08:17 -- host/digest.sh@92 -- # get_accel_stats 00:17:47.042 15:08:17 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:47.042 15:08:17 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:47.042 | select(.opcode=="crc32c") 00:17:47.042 | "\(.module_name) \(.executed)"' 00:17:47.042 15:08:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:47.042 15:08:17 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:47.042 15:08:17 -- host/digest.sh@93 -- # exp_module=software 00:17:47.042 15:08:17 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:47.042 15:08:17 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:47.042 15:08:17 -- host/digest.sh@97 -- # killprocess 83621 00:17:47.042 15:08:17 -- common/autotest_common.sh@936 -- # '[' -z 83621 ']' 00:17:47.042 15:08:17 -- common/autotest_common.sh@940 -- # kill -0 83621 00:17:47.042 15:08:17 -- common/autotest_common.sh@941 -- # uname 00:17:47.042 15:08:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:47.042 15:08:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83621 00:17:47.042 15:08:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:47.042 15:08:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:47.042 15:08:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83621' 00:17:47.042 killing process with pid 83621 00:17:47.042 Received shutdown signal, test time was about 2.000000 seconds 00:17:47.042 00:17:47.042 Latency(us) 00:17:47.042 [2024-11-20T15:08:17.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.042 [2024-11-20T15:08:17.846Z] =================================================================================================================== 00:17:47.042 [2024-11-20T15:08:17.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.042 15:08:17 -- common/autotest_common.sh@955 -- # kill 83621 00:17:47.042 15:08:17 -- common/autotest_common.sh@960 -- # wait 83621 00:17:47.042 15:08:17 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:17:47.042 15:08:17 -- host/digest.sh@77 -- # local rw bs qd 00:17:47.042 15:08:17 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:47.042 15:08:17 -- host/digest.sh@80 -- # rw=randwrite 00:17:47.042 15:08:17 -- host/digest.sh@80 -- # bs=131072 00:17:47.042 15:08:17 -- host/digest.sh@80 -- # qd=16 00:17:47.042 15:08:17 -- host/digest.sh@82 -- # bperfpid=83676 00:17:47.042 15:08:17 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:47.042 15:08:17 -- host/digest.sh@83 -- # waitforlisten 83676 /var/tmp/bperf.sock 00:17:47.042 15:08:17 -- 
common/autotest_common.sh@829 -- # '[' -z 83676 ']' 00:17:47.042 15:08:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:47.042 15:08:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:47.042 15:08:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:47.042 15:08:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.042 15:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:47.042 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:47.042 Zero copy mechanism will not be used. 00:17:47.042 [2024-11-20 15:08:17.833735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:47.042 [2024-11-20 15:08:17.833813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83676 ] 00:17:47.300 [2024-11-20 15:08:17.972132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.300 [2024-11-20 15:08:18.010963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.300 15:08:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.300 15:08:18 -- common/autotest_common.sh@862 -- # return 0 00:17:47.300 15:08:18 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:17:47.300 15:08:18 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:17:47.300 15:08:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:47.867 15:08:18 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:47.867 15:08:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:48.126 nvme0n1 00:17:48.126 15:08:18 -- host/digest.sh@91 -- # bperf_py perform_tests 00:17:48.126 15:08:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:48.385 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:48.385 Zero copy mechanism will not be used. 00:17:48.385 Running I/O for 2 seconds... 
00:17:50.289 00:17:50.289 Latency(us) 00:17:50.289 [2024-11-20T15:08:21.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.289 [2024-11-20T15:08:21.093Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:50.289 nvme0n1 : 2.00 6140.24 767.53 0.00 0.00 2600.06 1980.97 9532.51 00:17:50.289 [2024-11-20T15:08:21.093Z] =================================================================================================================== 00:17:50.289 [2024-11-20T15:08:21.093Z] Total : 6140.24 767.53 0.00 0.00 2600.06 1980.97 9532.51 00:17:50.289 0 00:17:50.289 15:08:20 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:50.289 15:08:21 -- host/digest.sh@92 -- # get_accel_stats 00:17:50.289 15:08:21 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:50.289 15:08:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:50.289 15:08:21 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:50.289 | select(.opcode=="crc32c") 00:17:50.289 | "\(.module_name) \(.executed)"' 00:17:50.548 15:08:21 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:50.548 15:08:21 -- host/digest.sh@93 -- # exp_module=software 00:17:50.548 15:08:21 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:50.548 15:08:21 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:50.548 15:08:21 -- host/digest.sh@97 -- # killprocess 83676 00:17:50.548 15:08:21 -- common/autotest_common.sh@936 -- # '[' -z 83676 ']' 00:17:50.548 15:08:21 -- common/autotest_common.sh@940 -- # kill -0 83676 00:17:50.548 15:08:21 -- common/autotest_common.sh@941 -- # uname 00:17:50.548 15:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.548 15:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83676 00:17:50.548 killing process with pid 83676 00:17:50.548 Received shutdown signal, test time was about 2.000000 seconds 00:17:50.548 00:17:50.548 Latency(us) 00:17:50.548 [2024-11-20T15:08:21.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.548 [2024-11-20T15:08:21.352Z] =================================================================================================================== 00:17:50.548 [2024-11-20T15:08:21.352Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.548 15:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:50.548 15:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:50.548 15:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83676' 00:17:50.548 15:08:21 -- common/autotest_common.sh@955 -- # kill 83676 00:17:50.548 15:08:21 -- common/autotest_common.sh@960 -- # wait 83676 00:17:50.808 15:08:21 -- host/digest.sh@126 -- # killprocess 83484 00:17:50.808 15:08:21 -- common/autotest_common.sh@936 -- # '[' -z 83484 ']' 00:17:50.808 15:08:21 -- common/autotest_common.sh@940 -- # kill -0 83484 00:17:50.808 15:08:21 -- common/autotest_common.sh@941 -- # uname 00:17:50.808 15:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.808 15:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83484 00:17:50.808 killing process with pid 83484 00:17:50.808 15:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.808 15:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.808 15:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83484' 00:17:50.808 
15:08:21 -- common/autotest_common.sh@955 -- # kill 83484 00:17:50.808 15:08:21 -- common/autotest_common.sh@960 -- # wait 83484 00:17:51.067 00:17:51.067 real 0m15.855s 00:17:51.067 user 0m31.528s 00:17:51.067 sys 0m4.298s 00:17:51.067 15:08:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:51.067 15:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 ************************************ 00:17:51.067 END TEST nvmf_digest_clean 00:17:51.067 ************************************ 00:17:51.067 15:08:21 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:51.067 15:08:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:51.067 15:08:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.067 15:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 ************************************ 00:17:51.067 START TEST nvmf_digest_error 00:17:51.067 ************************************ 00:17:51.067 15:08:21 -- common/autotest_common.sh@1114 -- # run_digest_error 00:17:51.067 15:08:21 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:51.067 15:08:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:51.067 15:08:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.067 15:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 15:08:21 -- nvmf/common.sh@469 -- # nvmfpid=83746 00:17:51.067 15:08:21 -- nvmf/common.sh@470 -- # waitforlisten 83746 00:17:51.067 15:08:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:51.067 15:08:21 -- common/autotest_common.sh@829 -- # '[' -z 83746 ']' 00:17:51.067 15:08:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.067 15:08:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.067 15:08:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.067 15:08:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.067 15:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:51.067 [2024-11-20 15:08:21.775073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:51.067 [2024-11-20 15:08:21.775158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.326 [2024-11-20 15:08:21.907803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.326 [2024-11-20 15:08:21.942038] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:51.326 [2024-11-20 15:08:21.942205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.326 [2024-11-20 15:08:21.942222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.326 [2024-11-20 15:08:21.942231] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.326 [2024-11-20 15:08:21.942256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.326 15:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.326 15:08:22 -- common/autotest_common.sh@862 -- # return 0 00:17:51.326 15:08:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.326 15:08:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.326 15:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.326 15:08:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.326 15:08:22 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:51.326 15:08:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.326 15:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.326 [2024-11-20 15:08:22.070653] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:51.326 15:08:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.326 15:08:22 -- host/digest.sh@104 -- # common_target_config 00:17:51.326 15:08:22 -- host/digest.sh@43 -- # rpc_cmd 00:17:51.326 15:08:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.326 15:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.585 null0 00:17:51.585 [2024-11-20 15:08:22.138231] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.585 [2024-11-20 15:08:22.162369] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.585 15:08:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.585 15:08:22 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:51.585 15:08:22 -- host/digest.sh@54 -- # local rw bs qd 00:17:51.585 15:08:22 -- host/digest.sh@56 -- # rw=randread 00:17:51.585 15:08:22 -- host/digest.sh@56 -- # bs=4096 00:17:51.585 15:08:22 -- host/digest.sh@56 -- # qd=128 00:17:51.585 15:08:22 -- host/digest.sh@58 -- # bperfpid=83775 00:17:51.585 15:08:22 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:51.585 15:08:22 -- host/digest.sh@60 -- # waitforlisten 83775 /var/tmp/bperf.sock 00:17:51.585 15:08:22 -- common/autotest_common.sh@829 -- # '[' -z 83775 ']' 00:17:51.585 15:08:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:51.585 15:08:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.585 15:08:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:51.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:51.585 15:08:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.585 15:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.585 [2024-11-20 15:08:22.221734] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:51.585 [2024-11-20 15:08:22.221837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83775 ] 00:17:51.585 [2024-11-20 15:08:22.356527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.846 [2024-11-20 15:08:22.392683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.846 15:08:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.846 15:08:22 -- common/autotest_common.sh@862 -- # return 0 00:17:51.846 15:08:22 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:51.846 15:08:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:52.137 15:08:22 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:52.137 15:08:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.137 15:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:52.137 15:08:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.137 15:08:22 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.137 15:08:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:52.395 nvme0n1 00:17:52.395 15:08:23 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:52.395 15:08:23 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.395 15:08:23 -- common/autotest_common.sh@10 -- # set +x 00:17:52.395 15:08:23 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.395 15:08:23 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:52.395 15:08:23 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:52.655 Running I/O for 2 seconds... 
00:17:52.655 [2024-11-20 15:08:23.267993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.268069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.268085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.287789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.287841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.287857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.307485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.307532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.307548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.326499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.326566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.326583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.344496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.344559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.344575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.362139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.362194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.362209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.379791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.379851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.397393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.397441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.397455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.414891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.414938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.414953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.432795] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.432855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.432870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.655 [2024-11-20 15:08:23.450410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.655 [2024-11-20 15:08:23.450468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.655 [2024-11-20 15:08:23.450484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.468402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.468459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.468475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.486083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.486130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.486145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.503687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.503731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.503745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.521251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.521298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.521314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.538781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.538830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.538845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.556349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.915 [2024-11-20 15:08:23.556403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.915 [2024-11-20 15:08:23.556418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.915 [2024-11-20 15:08:23.573908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.573963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.573978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.591584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.591659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.591689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.609207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.609265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.609281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.626889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.626958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 
15:08:23.626974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.644992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.645062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.645078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.662880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.662946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.662961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.680458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.680506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.680521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.698014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.698056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.698071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:52.916 [2024-11-20 15:08:23.715500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:52.916 [2024-11-20 15:08:23.715542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:52.916 [2024-11-20 15:08:23.715558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.732974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.733016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.733030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.750525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.750571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5949 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.750586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.768226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.768271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.768286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.785882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.785955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.785970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.803560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.803605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.803620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.821229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.821279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.821294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.838827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.838869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.838884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.856485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.856532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.856548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.874131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.874182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:13204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.874198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.892775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.892878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.892905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.910697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.910753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.910769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.928076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.928120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.928134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.945400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.945440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.945454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.176 [2024-11-20 15:08:23.962845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.176 [2024-11-20 15:08:23.962890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.176 [2024-11-20 15:08:23.962905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:23.980310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:23.980358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:23.980373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:23.997764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:23.997806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:23.997821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.015120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.015162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.015177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.032602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.032670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.032685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.050079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.050127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.050143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.067483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.067529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.067544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.084894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.084939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.084954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.102378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.102420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.102436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.120011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 
00:17:53.435 [2024-11-20 15:08:24.120059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.120074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.137536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.137586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.435 [2024-11-20 15:08:24.137600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.435 [2024-11-20 15:08:24.155020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.435 [2024-11-20 15:08:24.155063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.436 [2024-11-20 15:08:24.155077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.436 [2024-11-20 15:08:24.172669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.436 [2024-11-20 15:08:24.172710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.436 [2024-11-20 15:08:24.172726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.436 [2024-11-20 15:08:24.190390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.436 [2024-11-20 15:08:24.190447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.436 [2024-11-20 15:08:24.190462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.436 [2024-11-20 15:08:24.208570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.436 [2024-11-20 15:08:24.208627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.436 [2024-11-20 15:08:24.208659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.436 [2024-11-20 15:08:24.226250] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.436 [2024-11-20 15:08:24.226293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.436 [2024-11-20 15:08:24.226308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.243904] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.243953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.243968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.261441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.261489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.261505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.278933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.278983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.278997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.296486] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.296537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.296553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.314094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.314145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.314159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.331742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.331793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.331808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.349193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.349244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.349259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.366808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.366863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.366878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.391848] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.391902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.391917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.409254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.695 [2024-11-20 15:08:24.409298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.695 [2024-11-20 15:08:24.409313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.695 [2024-11-20 15:08:24.426705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.696 [2024-11-20 15:08:24.426749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 15:08:24.426764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 15:08:24.444066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.696 [2024-11-20 15:08:24.444107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 15:08:24.444121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 15:08:24.461452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.696 [2024-11-20 15:08:24.461498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 15:08:24.461512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 15:08:24.479042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.696 [2024-11-20 15:08:24.479088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 15:08:24.479104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.696 [2024-11-20 15:08:24.496630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.696 [2024-11-20 15:08:24.496692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.696 [2024-11-20 15:08:24.496707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.514132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.955 [2024-11-20 15:08:24.514176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.955 [2024-11-20 15:08:24.514191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.531626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.955 [2024-11-20 15:08:24.531700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.955 [2024-11-20 15:08:24.531719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.549126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.955 [2024-11-20 15:08:24.549175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.955 [2024-11-20 15:08:24.549191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.566620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.955 [2024-11-20 15:08:24.566683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.955 [2024-11-20 15:08:24.566699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.584302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.955 [2024-11-20 15:08:24.584360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.955 [2024-11-20 15:08:24.584376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.955 [2024-11-20 15:08:24.602076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.602147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 
15:08:24.602163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.619903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.619956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.637543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.637593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.637608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.654906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.654951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.654967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.672320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.672363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.672377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.689784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.689820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.689833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.707057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.707092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.707105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.724423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.724465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12546 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.724479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.956 [2024-11-20 15:08:24.741904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:53.956 [2024-11-20 15:08:24.741940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.956 [2024-11-20 15:08:24.741953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.759322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.759361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.759375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.776751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.776791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.776804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.794126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.794172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.794187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.811590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.811637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.811663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.829070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.829113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.829127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.846565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.846605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:1045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.846619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.864030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.864068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.864081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.881287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.881328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.881342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.898925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.898968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.898981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.916373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.916426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.916442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.934055] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.934111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.934129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.951467] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.215 [2024-11-20 15:08:24.951510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.215 [2024-11-20 15:08:24.951524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.215 [2024-11-20 15:08:24.968871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.216 [2024-11-20 15:08:24.968914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.216 [2024-11-20 15:08:24.968930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.216 [2024-11-20 15:08:24.986290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.216 [2024-11-20 15:08:24.986343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.216 [2024-11-20 15:08:24.986357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.216 [2024-11-20 15:08:25.003620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.216 [2024-11-20 15:08:25.003670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.216 [2024-11-20 15:08:25.003686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.021209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.021255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.021269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.038831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.038875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.038891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.056149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.056190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.056204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.073435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.073475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.073489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.090821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 
00:17:54.475 [2024-11-20 15:08:25.090862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.090877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.108201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.108243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.108257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.125534] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.125573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.125587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.142940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.142983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.142997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.160299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.160340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.160354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.177610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.177662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.177678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.194938] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.194991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.195004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.212246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.212283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.212296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 [2024-11-20 15:08:25.229563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f6e410) 00:17:54.475 [2024-11-20 15:08:25.229605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.475 [2024-11-20 15:08:25.229619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.475 00:17:54.475 Latency(us) 00:17:54.475 [2024-11-20T15:08:25.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.475 [2024-11-20T15:08:25.279Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:54.475 nvme0n1 : 2.01 14307.02 55.89 0.00 0.00 8941.00 8102.63 33840.41 00:17:54.475 [2024-11-20T15:08:25.279Z] =================================================================================================================== 00:17:54.475 [2024-11-20T15:08:25.279Z] Total : 14307.02 55.89 0.00 0.00 8941.00 8102.63 33840.41 00:17:54.475 0 00:17:54.475 15:08:25 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:54.475 15:08:25 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:54.475 15:08:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:54.475 15:08:25 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:54.475 | .driver_specific 00:17:54.475 | .nvme_error 00:17:54.475 | .status_code 00:17:54.475 | .command_transient_transport_error' 00:17:55.043 15:08:25 -- host/digest.sh@71 -- # (( 112 > 0 )) 00:17:55.043 15:08:25 -- host/digest.sh@73 -- # killprocess 83775 00:17:55.043 15:08:25 -- common/autotest_common.sh@936 -- # '[' -z 83775 ']' 00:17:55.043 15:08:25 -- common/autotest_common.sh@940 -- # kill -0 83775 00:17:55.043 15:08:25 -- common/autotest_common.sh@941 -- # uname 00:17:55.043 15:08:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:55.043 15:08:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83775 00:17:55.043 15:08:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:55.043 killing process with pid 83775 00:17:55.043 15:08:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:55.043 15:08:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83775' 00:17:55.043 Received shutdown signal, test time was about 2.000000 seconds 00:17:55.043 00:17:55.043 Latency(us) 00:17:55.043 [2024-11-20T15:08:25.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.043 [2024-11-20T15:08:25.847Z] =================================================================================================================== 00:17:55.043 [2024-11-20T15:08:25.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.043 15:08:25 -- common/autotest_common.sh@955 -- # kill 83775 00:17:55.043 15:08:25 -- common/autotest_common.sh@960 -- # wait 83775 00:17:55.043 15:08:25 -- host/digest.sh@108 -- # run_bperf_err randread 
131072 16 00:17:55.043 15:08:25 -- host/digest.sh@54 -- # local rw bs qd 00:17:55.043 15:08:25 -- host/digest.sh@56 -- # rw=randread 00:17:55.043 15:08:25 -- host/digest.sh@56 -- # bs=131072 00:17:55.043 15:08:25 -- host/digest.sh@56 -- # qd=16 00:17:55.043 15:08:25 -- host/digest.sh@58 -- # bperfpid=83823 00:17:55.043 15:08:25 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:55.043 15:08:25 -- host/digest.sh@60 -- # waitforlisten 83823 /var/tmp/bperf.sock 00:17:55.043 15:08:25 -- common/autotest_common.sh@829 -- # '[' -z 83823 ']' 00:17:55.043 15:08:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:55.043 15:08:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:55.043 15:08:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:55.043 15:08:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.043 15:08:25 -- common/autotest_common.sh@10 -- # set +x 00:17:55.043 [2024-11-20 15:08:25.773062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:55.043 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:55.043 Zero copy mechanism will not be used. 00:17:55.043 [2024-11-20 15:08:25.773172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83823 ] 00:17:55.301 [2024-11-20 15:08:25.918257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.301 [2024-11-20 15:08:25.955597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.238 15:08:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:56.238 15:08:26 -- common/autotest_common.sh@862 -- # return 0 00:17:56.238 15:08:26 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.238 15:08:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:56.238 15:08:27 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:56.238 15:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.238 15:08:27 -- common/autotest_common.sh@10 -- # set +x 00:17:56.238 15:08:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.238 15:08:27 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.238 15:08:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:56.806 nvme0n1 00:17:56.806 15:08:27 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:56.806 15:08:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.806 15:08:27 -- common/autotest_common.sh@10 -- # set +x 00:17:56.806 15:08:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.806 15:08:27 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:56.806 15:08:27 -- 
host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:56.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:56.806 Zero copy mechanism will not be used. 00:17:56.806 Running I/O for 2 seconds... 00:17:56.806 [2024-11-20 15:08:27.462153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.462207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.462224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.466706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.466742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.466756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.471215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.471252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.471266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.475791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.475828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.475842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.480302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.480340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.480354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.484752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.484790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.484804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.489301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 
[2024-11-20 15:08:27.489339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.489354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.493744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.493780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.493794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.498259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.498297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.498311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.502789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.502826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.502840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.507240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.507276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.507290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.511775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.511812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.511826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.516274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.516312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.520759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.520795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.520809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.525329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.525367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.806 [2024-11-20 15:08:27.525381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.806 [2024-11-20 15:08:27.529939] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.806 [2024-11-20 15:08:27.529995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.530012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.534391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.534429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.534443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.538871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.538908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.538923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.543342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.543379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.543393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.547948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.547988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.548002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.552425] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.552464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.552478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.556920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.556960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.556974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.561533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.561599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.566106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.566146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.566160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.570698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.570736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.570750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.575124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.575162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.575176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.579658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.579694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.579708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:56.807 [2024-11-20 15:08:27.584199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.584236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.584250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.588788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.588825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.588839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.593349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.593387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.593401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.597911] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.597952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.597966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.602399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.602438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.602451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:56.807 [2024-11-20 15:08:27.606969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:56.807 [2024-11-20 15:08:27.607008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.807 [2024-11-20 15:08:27.607021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.611631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.611698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.611714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.616203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.616257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.616271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.620830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.620868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.620882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.625294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.625332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.625346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.629821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.629858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.629873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.634320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.634361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.634375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.638826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.638869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.638883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.643332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.643369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.067 [2024-11-20 15:08:27.643383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.067 [2024-11-20 15:08:27.647896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.067 [2024-11-20 15:08:27.647934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.647948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.652439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.652476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.652490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.656892] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.656930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.656943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.661479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.661519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.661533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.665959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.665997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.666011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.670497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.670537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.068 [2024-11-20 15:08:27.670551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.068 [2024-11-20 15:08:27.675015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.068 [2024-11-20 15:08:27.675052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:57.068 [2024-11-20 15:08:27.675066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:57.068 [2024-11-20 15:08:27.679501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0)
00:17:57.068 [2024-11-20 15:08:27.679538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:57.068 [2024-11-20 15:08:27.679552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-record pattern repeats roughly every 4-5 ms from 15:08:27.683 through 15:08:28.330 on tqpair=(0x7c35b0): nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error, nvme_qpair.c prints the affected READ (sqid:1 cid:15 nsid:1, len:32, varying lba), and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling through 0001/0021/0041/0061 ...]
00:17:57.591 [2024-11-20 15:08:28.334364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0)
00:17:57.591 [2024-11-20 15:08:28.334400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.591 [2024-11-20 15:08:28.334414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.591 [2024-11-20 15:08:28.338822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.591 [2024-11-20 15:08:28.338859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.591 [2024-11-20 15:08:28.338873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.591 [2024-11-20 15:08:28.343387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.591 [2024-11-20 15:08:28.343424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.591 [2024-11-20 15:08:28.343438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.591 [2024-11-20 15:08:28.347931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.591 [2024-11-20 15:08:28.347968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.591 [2024-11-20 15:08:28.347982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.591 [2024-11-20 15:08:28.352394] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.352431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.352444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.356914] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.356951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.356965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.361405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.361442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.361455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.365944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.365980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.365994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.370362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.370399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.370413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.374869] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.374906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.374920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.379307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.379344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.379358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.383790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.383825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.383839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.592 [2024-11-20 15:08:28.388341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.592 [2024-11-20 15:08:28.388378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.592 [2024-11-20 15:08:28.388393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.392932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.392968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.392983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.397497] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.397534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.397548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.402070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.402107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.402121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.406664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.406701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.406715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.411160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.411206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.411220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.415729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.850 [2024-11-20 15:08:28.415765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.850 [2024-11-20 15:08:28.415779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.850 [2024-11-20 15:08:28.420228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.420269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.420283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.424863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.424914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:57.851 [2024-11-20 15:08:28.429417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.429457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.429471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.433898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.433935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.433949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.438435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.438471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.438486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.442888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.442924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.442937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.447332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.447370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.447383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.451853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.451889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.451903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.456392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.456429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.456444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.460919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.460956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.460969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.465518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.465555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.465569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.470017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.470053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.470067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.474468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.474505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.474519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.478903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.478952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.478967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.483498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.483548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.483562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.488089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.851 [2024-11-20 15:08:28.488138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.851 [2024-11-20 15:08:28.488153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.851 [2024-11-20 15:08:28.492571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.492614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.492628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.497135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.497173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.497188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.501673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.501708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.501722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.506130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.506167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.506181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.510621] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.510669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.510684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.515186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.515231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.515245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.519784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.519820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:57.852 [2024-11-20 15:08:28.519835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.524271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.524307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.524321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.528664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.528701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.528715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.533239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.533280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.533294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.537816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.537853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.537867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.542281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.542318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.542332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.546801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.546837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.546851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.551347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.551384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.551398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.556022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.556060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.556073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.560587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.560625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.852 [2024-11-20 15:08:28.560652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.852 [2024-11-20 15:08:28.565177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.852 [2024-11-20 15:08:28.565214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.565228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.569722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.569778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.569793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.574279] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.574317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.574331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.578838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.578874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.578888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.583382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.583423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.583437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.587903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.587939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.587954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.592371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.592408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.592421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.596903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.596940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.596954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.601529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.601569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.601583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.606155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.606194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.606208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.610698] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.610734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.610747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.615041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 
00:17:57.853 [2024-11-20 15:08:28.615078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.615092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.619598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.619636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.619667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.624176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.624213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.624228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.628734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.628771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.628785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.633179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.633215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.633229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.637765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.637802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.637817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.642382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.642419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.642433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.647056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.647093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.647107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:57.853 [2024-11-20 15:08:28.651610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:57.853 [2024-11-20 15:08:28.651660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.853 [2024-11-20 15:08:28.651675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.656151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.656188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.656202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.660784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.660822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.660836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.665363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.665400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.665414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.669931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.669968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.669981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.674591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.674633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.674664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.679159] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.679205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.679221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.683846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.683887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.683901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.688434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.688475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.688490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.693000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.693037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.693051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.697616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.697664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.113 [2024-11-20 15:08:28.697679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.113 [2024-11-20 15:08:28.702190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.113 [2024-11-20 15:08:28.702229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.702242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.706746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.706784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.706799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:58.114 [2024-11-20 15:08:28.711232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.711269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.711283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.715847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.715884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.715898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.720350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.720390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.720404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.724940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.724978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.724992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.729499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.729536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.729550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.734012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.734050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.734063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.738496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.738533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.738548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.743116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.743154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.743167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.747567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.747605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.747618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.752018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.752056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.752070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.756434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.756471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.756485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.761022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.761059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.761073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.765586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.765625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.765652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.770160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.770197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.770211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.774709] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.774746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.774760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.779269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.779309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.779323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.783842] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.783879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.783893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.788373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.788411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.788425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.792963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.793005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.793019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.797487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.797526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.797540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.802068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.802105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:58.114 [2024-11-20 15:08:28.802119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.806601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.806651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.806667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.811220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.811257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.811271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.815779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.815815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.815829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.820232] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.820270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.820284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.824757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.824793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.824807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.829353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.829392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.829407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.833861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.833898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.833912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.838360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.838399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.838413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.842820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.842859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.842874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.847274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.847312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.847326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.851802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.851839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.851853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.114 [2024-11-20 15:08:28.856418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.114 [2024-11-20 15:08:28.856456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.114 [2024-11-20 15:08:28.856470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.860906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.860944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.860958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.865379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.865417] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.865430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.869921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.869959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.869973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.874357] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.874394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.874408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.878845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.878881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.878895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.883352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.883389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.883403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.887851] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.887887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.887901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.892426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.892462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.892476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.896995] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 
00:17:58.115 [2024-11-20 15:08:28.897033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.897047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.901516] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.901552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.901565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.905944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.905980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.905995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.115 [2024-11-20 15:08:28.910501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.115 [2024-11-20 15:08:28.910540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.115 [2024-11-20 15:08:28.910554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.915120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.915159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.915173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.919770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.919808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.919822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.924275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.924315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.924329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.928852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.928890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.928908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.933530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.933571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.933586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.938057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.938094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.938108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.942552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.942590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.942604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.947087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.947125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.947139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.951553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.951590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.951603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.956105] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.956143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.956157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.960521] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.960559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.960572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.965031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.965068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.965082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.969525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.969564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.969578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.974053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.974090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.974104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.978603] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.978653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.978669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.983169] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.983214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.983228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.372 [2024-11-20 15:08:28.987821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.372 [2024-11-20 15:08:28.987863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.372 [2024-11-20 15:08:28.987878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:17:58.373 [2024-11-20 15:08:28.992327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:28.992368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:28.992382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:28.996873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:28.996915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:28.996929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.001567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.001608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.001623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.006219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.006260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.006275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.010761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.010803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.010819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.015388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.015427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.015441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.020119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.020159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.020173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.024662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.024699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.024714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.029210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.029249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.029263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.034171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.034208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.034223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.038817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.038864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.038878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.043347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.043384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.043398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.047969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.048009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.052487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.052525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.052539] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.057014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.057052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.057066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.061608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.061658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.061673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.066143] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.066186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.066200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.070783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.373 [2024-11-20 15:08:29.070823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.373 [2024-11-20 15:08:29.070838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.373 [2024-11-20 15:08:29.075365] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.075403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.075417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.079890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.079926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.079941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.084402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.084441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:58.374 [2024-11-20 15:08:29.084454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.088919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.088955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.088970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.093480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.093518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.093531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.098030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.098069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.098084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.374 [2024-11-20 15:08:29.102587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.374 [2024-11-20 15:08:29.102624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.374 [2024-11-20 15:08:29.102653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.107161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.107210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.107225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.111677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.111713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.111727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.116146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.116184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.116198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.120550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.120587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.120601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.124987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.125024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.125038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.129496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.129533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.129547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.134095] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.134130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.134144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.138558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.138609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.143060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.143096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.143109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.147658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.147693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.147706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.152189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.152225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.152239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.156777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.156814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.156828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.161299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.161336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.161350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.165810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.165846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.165860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.170336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.170373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.170387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.375 [2024-11-20 15:08:29.174862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.375 [2024-11-20 15:08:29.174898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.375 [2024-11-20 15:08:29.174912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.179368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 
00:17:58.634 [2024-11-20 15:08:29.179406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.179420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.183918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.183955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.183969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.188459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.188497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.188511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.193026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.193063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.193077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.197515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.197552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.197567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.201998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.202035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.202049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.206478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.206515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.206528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.210981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.211017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.211031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.215493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.215530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.215544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.219992] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.220028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.220042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.224532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.224569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.224583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.229003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.229040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.229054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.233531] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.233567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.233581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.238058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.238095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.238108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.634 [2024-11-20 15:08:29.242521] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.634 [2024-11-20 15:08:29.242557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.634 [2024-11-20 15:08:29.242570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.247137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.247174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.247188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.251710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.251745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.251758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.256281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.256319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.256333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.260829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.260866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.260879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.265289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.265326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.265339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.269901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.269939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.269953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:17:58.635 [2024-11-20 15:08:29.274406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.274445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.274459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.278964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.279001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.279016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.283378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.283413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.283427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.288000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.288036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.288051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.292519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.292555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.292569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.297078] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.297114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.297128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.301661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.301700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.301714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.306197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.306234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.306248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.310790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.310827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.310840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.315310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.315346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.315360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.319832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.319868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.319882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.324346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.324383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.324397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.328833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.328869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.328883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.333375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.333412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.635 [2024-11-20 15:08:29.333426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.635 [2024-11-20 15:08:29.337847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.635 [2024-11-20 15:08:29.337884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.337897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.342330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.342366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.342379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.346910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.346947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.346960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.351429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.351465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.351478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.355922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.355959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.355973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.360492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.360529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.360543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.365033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.365069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:58.636 [2024-11-20 15:08:29.365083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.369661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.369712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.369727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.374345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.374387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.374401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.378921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.378958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.378972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.383505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.383541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.387997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.388034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.388049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.392533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.392570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.392584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.397123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.397161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.397175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.401722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.401754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.401789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.406233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.406272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.406286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.410747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.410783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.410797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.415238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.415275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.415289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.419677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.419713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.419727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.636 [2024-11-20 15:08:29.424214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.636 [2024-11-20 15:08:29.424251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.636 [2024-11-20 15:08:29.424265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.637 [2024-11-20 15:08:29.428790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.637 [2024-11-20 15:08:29.428827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.637 [2024-11-20 15:08:29.428841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.637 [2024-11-20 15:08:29.433292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.637 [2024-11-20 15:08:29.433330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.637 [2024-11-20 15:08:29.433344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.895 [2024-11-20 15:08:29.437711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.895 [2024-11-20 15:08:29.437747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.895 [2024-11-20 15:08:29.437761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:58.895 [2024-11-20 15:08:29.442123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.895 [2024-11-20 15:08:29.442160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.895 [2024-11-20 15:08:29.442173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:58.895 [2024-11-20 15:08:29.446493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.895 [2024-11-20 15:08:29.446530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.895 [2024-11-20 15:08:29.446544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:58.895 [2024-11-20 15:08:29.451024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7c35b0) 00:17:58.895 [2024-11-20 15:08:29.451060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:58.895 [2024-11-20 15:08:29.451074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:58.895 00:17:58.895 Latency(us) 00:17:58.895 [2024-11-20T15:08:29.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.895 [2024-11-20T15:08:29.699Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:58.895 nvme0n1 : 2.00 6810.94 851.37 0.00 0.00 2346.04 2115.03 7208.96 00:17:58.895 [2024-11-20T15:08:29.699Z] =================================================================================================================== 00:17:58.895 [2024-11-20T15:08:29.699Z] Total : 6810.94 851.37 0.00 0.00 2346.04 2115.03 7208.96 00:17:58.895 0 00:17:58.895 15:08:29 -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:17:58.895 15:08:29 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:58.895 15:08:29 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:58.895 15:08:29 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:58.895 | .driver_specific 00:17:58.895 | .nvme_error 00:17:58.895 | .status_code 00:17:58.895 | .command_transient_transport_error' 00:17:59.156 15:08:29 -- host/digest.sh@71 -- # (( 439 > 0 )) 00:17:59.156 15:08:29 -- host/digest.sh@73 -- # killprocess 83823 00:17:59.156 15:08:29 -- common/autotest_common.sh@936 -- # '[' -z 83823 ']' 00:17:59.156 15:08:29 -- common/autotest_common.sh@940 -- # kill -0 83823 00:17:59.156 15:08:29 -- common/autotest_common.sh@941 -- # uname 00:17:59.156 15:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:59.156 15:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83823 00:17:59.156 15:08:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:59.156 killing process with pid 83823 00:17:59.156 15:08:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:59.156 15:08:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83823' 00:17:59.156 Received shutdown signal, test time was about 2.000000 seconds 00:17:59.156 00:17:59.156 Latency(us) 00:17:59.156 [2024-11-20T15:08:29.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.156 [2024-11-20T15:08:29.960Z] =================================================================================================================== 00:17:59.156 [2024-11-20T15:08:29.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:59.156 15:08:29 -- common/autotest_common.sh@955 -- # kill 83823 00:17:59.156 15:08:29 -- common/autotest_common.sh@960 -- # wait 83823 00:17:59.156 15:08:29 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:17:59.156 15:08:29 -- host/digest.sh@54 -- # local rw bs qd 00:17:59.156 15:08:29 -- host/digest.sh@56 -- # rw=randwrite 00:17:59.156 15:08:29 -- host/digest.sh@56 -- # bs=4096 00:17:59.156 15:08:29 -- host/digest.sh@56 -- # qd=128 00:17:59.156 15:08:29 -- host/digest.sh@58 -- # bperfpid=83885 00:17:59.156 15:08:29 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:59.156 15:08:29 -- host/digest.sh@60 -- # waitforlisten 83885 /var/tmp/bperf.sock 00:17:59.156 15:08:29 -- common/autotest_common.sh@829 -- # '[' -z 83885 ']' 00:17:59.156 15:08:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:59.156 15:08:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.156 15:08:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:59.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:59.156 15:08:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.156 15:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:59.415 [2024-11-20 15:08:29.977423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:59.415 [2024-11-20 15:08:29.977505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83885 ] 00:17:59.415 [2024-11-20 15:08:30.107822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.415 [2024-11-20 15:08:30.148860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.673 15:08:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.673 15:08:30 -- common/autotest_common.sh@862 -- # return 0 00:17:59.673 15:08:30 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.673 15:08:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:59.932 15:08:30 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:59.932 15:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.932 15:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.932 15:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.932 15:08:30 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:59.932 15:08:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:00.190 nvme0n1 00:18:00.190 15:08:30 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:18:00.190 15:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.190 15:08:30 -- common/autotest_common.sh@10 -- # set +x 00:18:00.190 15:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.190 15:08:30 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:00.190 15:08:30 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:00.190 Running I/O for 2 seconds... 
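The shell trace above is the setup for this randwrite/4096/qd128 error pass: host/digest.sh restarts bdevperf, enables NVMe error statistics (--nvme-error-stat), clears any previous crc32c error injection, attaches the target over TCP with data digest (--ddgst) enabled, then re-arms corrupt-mode crc32c injection before driving I/O for two seconds. A minimal sketch of that sequence, reconstructed only from the rpc.py/bdevperf.py calls shown in the trace (the socket path, address, port, and NQN are the values this particular run used, not defaults), with the last command showing how the transient-error count that the test later asserts on is read back, as in the check for the previous pass above:

  # all RPCs go to the bdevperf instance listening on /var/tmp/bperf.sock
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable     # clear stale injection
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # data digest enabled
  rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256
  bdevperf.py -s /var/tmp/bperf.sock perform_tests                                # 2-second randwrite run
  rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
         | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each injected corruption shows up in the output below as a data_crc32_calc_done digest error paired with a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, which is what increments that counter.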
00:18:00.190 [2024-11-20 15:08:30.957183] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ddc00 00:18:00.190 [2024-11-20 15:08:30.958981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.190 [2024-11-20 15:08:30.959026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:00.190 [2024-11-20 15:08:30.976931] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fef90 00:18:00.190 [2024-11-20 15:08:30.978682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.190 [2024-11-20 15:08:30.978723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:30.998631] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ff3c8 00:18:00.449 [2024-11-20 15:08:31.000782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.000826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.020924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190feb58 00:18:00.449 [2024-11-20 15:08:31.022954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.022990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.042937] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fe720 00:18:00.449 [2024-11-20 15:08:31.045013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.045049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.065173] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fe2e8 00:18:00.449 [2024-11-20 15:08:31.067291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.067328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.087409] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fdeb0 00:18:00.449 [2024-11-20 15:08:31.089421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.089460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 
m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.107345] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fda78 00:18:00.449 [2024-11-20 15:08:31.108958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.108994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.126197] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fd640 00:18:00.449 [2024-11-20 15:08:31.127815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.127857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.145112] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fd208 00:18:00.449 [2024-11-20 15:08:31.146747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.146786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.164059] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fcdd0 00:18:00.449 [2024-11-20 15:08:31.165648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.165687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.182979] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fc998 00:18:00.449 [2024-11-20 15:08:31.184548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.184588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.200590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fc560 00:18:00.449 [2024-11-20 15:08:31.201888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.201927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.217026] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fc128 00:18:00.449 [2024-11-20 15:08:31.218399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.218437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.233541] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fbcf0 00:18:00.449 [2024-11-20 15:08:31.234803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.234840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:00.449 [2024-11-20 15:08:31.249952] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fb8b8 00:18:00.449 [2024-11-20 15:08:31.251178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.449 [2024-11-20 15:08:31.251234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.266312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fb480 00:18:00.706 [2024-11-20 15:08:31.267555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.706 [2024-11-20 15:08:31.267610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.282846] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fb048 00:18:00.706 [2024-11-20 15:08:31.284082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.706 [2024-11-20 15:08:31.284123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.299351] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fac10 00:18:00.706 [2024-11-20 15:08:31.300547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.706 [2024-11-20 15:08:31.300583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.315796] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fa7d8 00:18:00.706 [2024-11-20 15:08:31.316988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.706 [2024-11-20 15:08:31.317025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.332311] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190fa3a0 00:18:00.706 [2024-11-20 15:08:31.333498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.706 [2024-11-20 15:08:31.333536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:00.706 [2024-11-20 15:08:31.348814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f9f68 00:18:00.706 [2024-11-20 15:08:31.349984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.350022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.365244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f9b30 00:18:00.707 [2024-11-20 15:08:31.366474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.366512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.381808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f96f8 00:18:00.707 [2024-11-20 15:08:31.382974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.383014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.398712] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f92c0 00:18:00.707 [2024-11-20 15:08:31.399868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.399910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.416303] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f8e88 00:18:00.707 [2024-11-20 15:08:31.417437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.417473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.432734] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f8a50 00:18:00.707 [2024-11-20 15:08:31.433901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.433935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.449157] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f8618 00:18:00.707 [2024-11-20 15:08:31.450286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.450321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.465543] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f81e0 00:18:00.707 [2024-11-20 15:08:31.466633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.466679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.482924] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f7da8 00:18:00.707 [2024-11-20 15:08:31.484038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.484075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:00.707 [2024-11-20 15:08:31.499429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f7970 00:18:00.707 [2024-11-20 15:08:31.500525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.707 [2024-11-20 15:08:31.500561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.515869] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f7538 00:18:00.964 [2024-11-20 15:08:31.516944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.516981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.532533] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f7100 00:18:00.964 [2024-11-20 15:08:31.533692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.533743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.549441] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f6cc8 00:18:00.964 [2024-11-20 15:08:31.550509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.550548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.566022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f6890 00:18:00.964 [2024-11-20 15:08:31.567076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.567115] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.583445] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f6458 00:18:00.964 [2024-11-20 15:08:31.584778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.584816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.600468] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f6020 00:18:00.964 [2024-11-20 15:08:31.601488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.601524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.617046] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f5be8 00:18:00.964 [2024-11-20 15:08:31.618054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.618093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.633582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f57b0 00:18:00.964 [2024-11-20 15:08:31.634576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.634616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.650134] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f5378 00:18:00.964 [2024-11-20 15:08:31.651119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:4834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.651155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.666486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f4f40 00:18:00.964 [2024-11-20 15:08:31.667476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.667511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.682819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f4b08 00:18:00.964 [2024-11-20 15:08:31.683789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.683825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.699275] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f46d0 00:18:00.964 [2024-11-20 15:08:31.700234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.700270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.715760] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f4298 00:18:00.964 [2024-11-20 15:08:31.716703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.716742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.732181] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f3e60 00:18:00.964 [2024-11-20 15:08:31.733110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.733145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.748562] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f3a28 00:18:00.964 [2024-11-20 15:08:31.749473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.749508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:00.964 [2024-11-20 15:08:31.764984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f35f0 00:18:00.964 [2024-11-20 15:08:31.765919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:00.964 [2024-11-20 15:08:31.765971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:01.223 [2024-11-20 15:08:31.783697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f31b8 00:18:01.223 [2024-11-20 15:08:31.784797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.223 [2024-11-20 15:08:31.784839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:01.223 [2024-11-20 15:08:31.800916] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f2d80 00:18:01.223 [2024-11-20 15:08:31.802061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.223 [2024-11-20 
15:08:31.802101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:01.223 [2024-11-20 15:08:31.818486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f2948 00:18:01.223 [2024-11-20 15:08:31.819420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.223 [2024-11-20 15:08:31.819470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:01.223 [2024-11-20 15:08:31.835255] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f2510 00:18:01.223 [2024-11-20 15:08:31.836168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.836212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.852077] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f20d8 00:18:01.224 [2024-11-20 15:08:31.852983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.853029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.869005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f1ca0 00:18:01.224 [2024-11-20 15:08:31.869891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.869935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.885665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f1868 00:18:01.224 [2024-11-20 15:08:31.886534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.886575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.902493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f1430 00:18:01.224 [2024-11-20 15:08:31.903382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.903424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.919120] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f0ff8 00:18:01.224 [2024-11-20 15:08:31.919978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9340 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:01.224 [2024-11-20 15:08:31.920020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.935854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f0bc0 00:18:01.224 [2024-11-20 15:08:31.936685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.936726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.952332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f0788 00:18:01.224 [2024-11-20 15:08:31.953130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.953165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.968728] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190f0350 00:18:01.224 [2024-11-20 15:08:31.969500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.969530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:31.985260] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eff18 00:18:01.224 [2024-11-20 15:08:31.986042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.224 [2024-11-20 15:08:31.986077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:01.224 [2024-11-20 15:08:32.002268] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190efae0 00:18:01.225 [2024-11-20 15:08:32.003064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.225 [2024-11-20 15:08:32.003106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:01.225 [2024-11-20 15:08:32.019030] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ef6a8 00:18:01.225 [2024-11-20 15:08:32.019797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.225 [2024-11-20 15:08:32.019839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.037005] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ef270 00:18:01.484 [2024-11-20 15:08:32.037758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10809 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.037799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.054179] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eee38 00:18:01.484 [2024-11-20 15:08:32.054922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.054963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.070720] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eea00 00:18:01.484 [2024-11-20 15:08:32.071469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.071509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.087171] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ee5c8 00:18:01.484 [2024-11-20 15:08:32.087911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.087948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.103524] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ee190 00:18:01.484 [2024-11-20 15:08:32.104228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.104263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.119905] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190edd58 00:18:01.484 [2024-11-20 15:08:32.120591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.120627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.136312] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ed920 00:18:01.484 [2024-11-20 15:08:32.137003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.137038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.152688] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ed4e8 00:18:01.484 [2024-11-20 15:08:32.153362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 
nsid:1 lba:19128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.153397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.169135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ed0b0 00:18:01.484 [2024-11-20 15:08:32.169801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.169835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.185990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ecc78 00:18:01.484 [2024-11-20 15:08:32.186834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.186867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.203904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ec840 00:18:01.484 [2024-11-20 15:08:32.204539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.204573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.221812] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ec408 00:18:01.484 [2024-11-20 15:08:32.222464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.222514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.238322] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ebfd0 00:18:01.484 [2024-11-20 15:08:32.238943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.238982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.255160] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ebb98 00:18:01.484 [2024-11-20 15:08:32.255827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.255875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:01.484 [2024-11-20 15:08:32.272470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eb760 00:18:01.484 [2024-11-20 15:08:32.273122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.484 [2024-11-20 15:08:32.273165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.289761] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eb328 00:18:01.749 [2024-11-20 15:08:32.290383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.290430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.307166] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eaef0 00:18:01.749 [2024-11-20 15:08:32.307794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.307842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.324592] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190eaab8 00:18:01.749 [2024-11-20 15:08:32.325194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.325242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.341493] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ea680 00:18:01.749 [2024-11-20 15:08:32.342099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.342138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.358219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190ea248 00:18:01.749 [2024-11-20 15:08:32.358828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.358868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.374763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e9e10 00:18:01.749 [2024-11-20 15:08:32.375311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.375346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.391244] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e99d8 00:18:01.749 [2024-11-20 15:08:32.391785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.391821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.407690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e95a0 00:18:01.749 [2024-11-20 15:08:32.408190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.408223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.424988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e9168 00:18:01.749 [2024-11-20 15:08:32.425486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.425525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.441356] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e8d30 00:18:01.749 [2024-11-20 15:08:32.441852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.441885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.457779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e88f8 00:18:01.749 [2024-11-20 15:08:32.458253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.458284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.474142] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e84c0 00:18:01.749 [2024-11-20 15:08:32.474607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.474657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.490486] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e8088 00:18:01.749 [2024-11-20 15:08:32.490950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.490978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.506864] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e7c50 00:18:01.749 [2024-11-20 
15:08:32.507311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.507340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.523389] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e7818 00:18:01.749 [2024-11-20 15:08:32.523840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.523874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:01.749 [2024-11-20 15:08:32.539763] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e73e0 00:18:01.749 [2024-11-20 15:08:32.540186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:01.749 [2024-11-20 15:08:32.540214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.556232] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e6fa8 00:18:02.051 [2024-11-20 15:08:32.556660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.556690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.572675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e6b70 00:18:02.051 [2024-11-20 15:08:32.573079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.573114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.589314] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e6738 00:18:02.051 [2024-11-20 15:08:32.589736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.589783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.605717] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e6300 00:18:02.051 [2024-11-20 15:08:32.606264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.606296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.622504] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e5ec8 
00:18:02.051 [2024-11-20 15:08:32.622916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.622956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.639013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e5a90 00:18:02.051 [2024-11-20 15:08:32.639399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.639433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.655597] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e5658 00:18:02.051 [2024-11-20 15:08:32.655980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.656155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.672459] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e5220 00:18:02.051 [2024-11-20 15:08:32.672829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.672860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.688965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e4de8 00:18:02.051 [2024-11-20 15:08:32.689313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.689359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.706178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e49b0 00:18:02.051 [2024-11-20 15:08:32.706514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.706548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.722665] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e4578 00:18:02.051 [2024-11-20 15:08:32.722974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.051 [2024-11-20 15:08:32.723005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:02.051 [2024-11-20 15:08:32.739987] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with 
pdu=0x2000190e4140 00:18:02.051 [2024-11-20 15:08:32.740428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.740466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.756484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e3d08 00:18:02.052 [2024-11-20 15:08:32.756791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.756821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.772854] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e38d0 00:18:02.052 [2024-11-20 15:08:32.773134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.773169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.789207] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e3498 00:18:02.052 [2024-11-20 15:08:32.789477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.789507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.805556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e3060 00:18:02.052 [2024-11-20 15:08:32.805828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.805874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.821903] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e2c28 00:18:02.052 [2024-11-20 15:08:32.822157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.822195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:02.052 [2024-11-20 15:08:32.838382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e27f0 00:18:02.052 [2024-11-20 15:08:32.838626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.052 [2024-11-20 15:08:32.838670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:02.310 [2024-11-20 15:08:32.854801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1684160) with pdu=0x2000190e23b8 00:18:02.310 [2024-11-20 15:08:32.855169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.310 [2024-11-20 15:08:32.855209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:02.310 [2024-11-20 15:08:32.871338] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e1f80 00:18:02.310 [2024-11-20 15:08:32.871563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.310 [2024-11-20 15:08:32.871588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:02.310 [2024-11-20 15:08:32.887691] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e1b48 00:18:02.310 [2024-11-20 15:08:32.887899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.310 [2024-11-20 15:08:32.887922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:02.310 [2024-11-20 15:08:32.904021] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e1710 00:18:02.310 [2024-11-20 15:08:32.904218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.310 [2024-11-20 15:08:32.904241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:02.310 [2024-11-20 15:08:32.920399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1684160) with pdu=0x2000190e12d8 00:18:02.310 [2024-11-20 15:08:32.920594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:02.310 [2024-11-20 15:08:32.920618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:02.310 00:18:02.310 Latency(us) 00:18:02.310 [2024-11-20T15:08:33.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.310 [2024-11-20T15:08:33.114Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:02.310 nvme0n1 : 2.00 14841.37 57.97 0.00 0.00 8616.08 7685.59 27405.96 00:18:02.310 [2024-11-20T15:08:33.114Z] =================================================================================================================== 00:18:02.310 [2024-11-20T15:08:33.114Z] Total : 14841.37 57.97 0.00 0.00 8616.08 7685.59 27405.96 00:18:02.310 0 00:18:02.310 15:08:32 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:02.310 15:08:32 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:02.310 15:08:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:02.310 15:08:32 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:02.310 | .driver_specific 00:18:02.310 | .nvme_error 00:18:02.310 | .status_code 00:18:02.310 | 
.command_transient_transport_error' 00:18:02.569 15:08:33 -- host/digest.sh@71 -- # (( 116 > 0 )) 00:18:02.569 15:08:33 -- host/digest.sh@73 -- # killprocess 83885 00:18:02.569 15:08:33 -- common/autotest_common.sh@936 -- # '[' -z 83885 ']' 00:18:02.569 15:08:33 -- common/autotest_common.sh@940 -- # kill -0 83885 00:18:02.569 15:08:33 -- common/autotest_common.sh@941 -- # uname 00:18:02.569 15:08:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.569 15:08:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83885 00:18:02.569 killing process with pid 83885 00:18:02.569 Received shutdown signal, test time was about 2.000000 seconds 00:18:02.569 00:18:02.569 Latency(us) 00:18:02.569 [2024-11-20T15:08:33.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.569 [2024-11-20T15:08:33.373Z] =================================================================================================================== 00:18:02.569 [2024-11-20T15:08:33.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.569 15:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:02.569 15:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:02.569 15:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83885' 00:18:02.569 15:08:33 -- common/autotest_common.sh@955 -- # kill 83885 00:18:02.570 15:08:33 -- common/autotest_common.sh@960 -- # wait 83885 00:18:02.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:18:02.829 15:08:33 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:18:02.829 15:08:33 -- host/digest.sh@54 -- # local rw bs qd 00:18:02.829 15:08:33 -- host/digest.sh@56 -- # rw=randwrite 00:18:02.829 15:08:33 -- host/digest.sh@56 -- # bs=131072 00:18:02.829 15:08:33 -- host/digest.sh@56 -- # qd=16 00:18:02.829 15:08:33 -- host/digest.sh@58 -- # bperfpid=83932 00:18:02.829 15:08:33 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:18:02.829 15:08:33 -- host/digest.sh@60 -- # waitforlisten 83932 /var/tmp/bperf.sock 00:18:02.829 15:08:33 -- common/autotest_common.sh@829 -- # '[' -z 83932 ']' 00:18:02.829 15:08:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:18:02.829 15:08:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.829 15:08:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:18:02.829 15:08:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.829 15:08:33 -- common/autotest_common.sh@10 -- # set +x 00:18:02.829 [2024-11-20 15:08:33.493345] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:02.829 [2024-11-20 15:08:33.493705] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83932 ] 00:18:02.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:02.829 Zero copy mechanism will not be used. 
00:18:02.829 [2024-11-20 15:08:33.632386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.087 [2024-11-20 15:08:33.668589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.024 15:08:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.024 15:08:34 -- common/autotest_common.sh@862 -- # return 0 00:18:04.024 15:08:34 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.024 15:08:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:18:04.024 15:08:34 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:18:04.024 15:08:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.024 15:08:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 15:08:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.024 15:08:34 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.024 15:08:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:18:04.592 nvme0n1 00:18:04.592 15:08:35 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:18:04.592 15:08:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.592 15:08:35 -- common/autotest_common.sh@10 -- # set +x 00:18:04.592 15:08:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.592 15:08:35 -- host/digest.sh@69 -- # bperf_py perform_tests 00:18:04.592 15:08:35 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:18:04.592 I/O size of 131072 is greater than zero copy threshold (65536). 00:18:04.592 Zero copy mechanism will not be used. 00:18:04.592 Running I/O for 2 seconds... 
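(A minimal consolidated sketch of the digest-error flow traced above, for readability; the socket path, target address, RPC names, and jq filter are taken from the host/digest.sh trace, and this reconstruction is not authoritative beyond what the trace shows.)
# bdevperf is launched as: bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# (in the trace, CRC-32C corruption on the target side is first disabled, then re-armed
#  after attach via: rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32)
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
# Each injected digest error is logged as a COMMAND TRANSIENT TRANSPORT ERROR (00/22);
# the test then reads the per-bdev error counter back and requires it to be non-zero:
errs=$($RPC bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 ))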
00:18:04.592 [2024-11-20 15:08:35.294045] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.294534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.294567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.299371] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.299700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.299732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.304444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.304770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.304807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.309525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.309989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.310025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.314759] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.315070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.315102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.319894] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.320204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.320235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.325010] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.325323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.325354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.330113] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.330438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.330469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.335220] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.335536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.335568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.340296] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.592 [2024-11-20 15:08:35.340614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.592 [2024-11-20 15:08:35.340655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.592 [2024-11-20 15:08:35.345382] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.345864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.350600] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.350929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.350971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.355659] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.355967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.355997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.360737] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.361046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.361076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.365794] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.366102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.366133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.370904] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.371226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.371257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.376000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.376327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.376360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.381196] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.381653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.381688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.386380] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.386715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.386747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.593 [2024-11-20 15:08:35.391559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.593 [2024-11-20 15:08:35.391887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.593 [2024-11-20 15:08:35.391924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.396875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.397186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.397216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.402093] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.402438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.402476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.407156] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.407487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.407521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.412306] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.412760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.412799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.417617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.417954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.417988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.422918] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.423255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.423300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.428357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.428831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.428867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.433853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.434186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 
[2024-11-20 15:08:35.434217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.439106] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.439435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.439465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.444329] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.444778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.444813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.449589] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.449918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.449951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.454690] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.455001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.455032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.459785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.460094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.460125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.464881] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.465192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.465223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.469942] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.470250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.470281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.475028] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.475346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.475377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.480310] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.480766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.485725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.486038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.486070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.491064] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.491389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.491420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.853 [2024-11-20 15:08:35.496299] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.853 [2024-11-20 15:08:35.496766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.853 [2024-11-20 15:08:35.496803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.501550] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.501888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.501923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.506719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.507033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.507065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.512017] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.512481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.512516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.517378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.517718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.517759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.522523] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.522847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.522882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.527583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.528038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.528073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.532837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.533151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.533182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.537939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.538270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.538301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.543052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.543495] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.543530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.548237] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.548545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.548576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.553398] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.553721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.553752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.558735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.559049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.559079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.563875] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.564187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.564218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.568944] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.569254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.569284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.574020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.574331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.574361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.579126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 
[2024-11-20 15:08:35.579450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.579480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.584205] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.584522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.584553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.589256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.589568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.594332] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.594778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.594814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.599546] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.599868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.599901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.604613] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.604929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.604960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.854 [2024-11-20 15:08:35.609675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.854 [2024-11-20 15:08:35.609987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.854 [2024-11-20 15:08:35.610019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.614713] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.615024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.615054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.619808] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.620117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.620147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.624822] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.625129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.625158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.629837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.630149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.630179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.634853] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.635162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.635202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.639941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.640247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.640277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.644985] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.645291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.645324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:04.855 [2024-11-20 15:08:35.650041] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:04.855 [2024-11-20 15:08:35.650350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:04.855 [2024-11-20 15:08:35.650380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.655304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.655618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.655659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.660887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.661219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.661249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.666024] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.666340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.666370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.671087] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.671417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.671448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.676144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.676458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.676489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.681376] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.681910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.681945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:05.116 [2024-11-20 15:08:35.689000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.689394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.689434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.694422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.694757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.694790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.699948] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.700283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.700316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.705731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.706044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.706077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.711240] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.711575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.711607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.717263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.717726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.717762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.722675] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.722988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.723020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.116 [2024-11-20 15:08:35.727778] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.116 [2024-11-20 15:08:35.728088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.116 [2024-11-20 15:08:35.728119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.732862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.733176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.733206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.738108] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.738417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.738449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.743487] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.743816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.743859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.748702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.749014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.749045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.753887] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.754203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.754234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.758974] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.759306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.759337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.764073] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.764523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.764558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.769378] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.769705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.769746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.774515] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.774843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.774884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.779701] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.780013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.780053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.785122] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.785460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.785552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.790387] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.790714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.790755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.795605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.796061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.796096] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.800837] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.801155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.801185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.805934] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.806247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.806287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.811069] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.811388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.811429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.816383] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.816712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.816743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.821609] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.821936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.821974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.826719] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.827028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 [2024-11-20 15:08:35.827068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.831775] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.832088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.117 
[2024-11-20 15:08:35.832118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.117 [2024-11-20 15:08:35.836870] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.117 [2024-11-20 15:08:35.837178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.837265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.842039] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.842347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.842387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.847068] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.847383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.847423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.852188] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.852501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.852533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.857263] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.857583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.857624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.862358] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.862681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.862717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.867392] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.867844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.867878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.872617] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.872944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.872978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.877710] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.878017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.878057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.882830] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.883139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.883169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.887876] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.888188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.888276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.893011] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.893323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.893353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.898067] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.898380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.898412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.903148] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.903629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.908369] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.908695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.908736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.118 [2024-11-20 15:08:35.913552] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.118 [2024-11-20 15:08:35.913874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.118 [2024-11-20 15:08:35.913908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.918910] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.919231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.919318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.924131] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.924603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.924791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.929727] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.930183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.930373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.935298] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.935764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.936011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.940925] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.941380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.941569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.946619] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.947103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.947323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.952349] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.952826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.953073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.958079] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.958539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.958784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.963705] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.964022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.964060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.968756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.969068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.969111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.973941] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 [2024-11-20 15:08:35.974260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.378 [2024-11-20 15:08:35.974351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.378 [2024-11-20 15:08:35.978971] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.378 
[2024-11-20 15:08:35.979528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:35.979823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:35.984679] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:35.985221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:35.985432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:35.990548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:35.991019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:35.991238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:35.996567] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:35.997055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:35.997331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.002673] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.003129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.003326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.008510] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.008988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.009183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.014218] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.014785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.014832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.019686] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.020001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.020035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.024849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.025171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.025219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.029949] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.030277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.030311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.035135] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.035482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.035514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.040496] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.040826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.040862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.045587] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.046075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.050819] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.051133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.051164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.055910] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.056221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.056252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.061022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.061339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.061370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.066632] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.066964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.066995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.072325] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.072654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.072684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.077826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.078151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.078181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.083256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.083587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.083618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.088470] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.088801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.088832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:05.379 [2024-11-20 15:08:36.093731] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.094041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.379 [2024-11-20 15:08:36.094071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.379 [2024-11-20 15:08:36.098898] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.379 [2024-11-20 15:08:36.099220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.099249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.104144] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.104454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.104484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.109248] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.109711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.109747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.114505] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.114826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.114857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.119571] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.119899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.119930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.124635] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.124958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.125000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.129707] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.130020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.130050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.134801] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.135108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.135150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.139862] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.140173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.140203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.144954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.145260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.145290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.150111] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.150423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.150454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.155242] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.155549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.155579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.160302] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.160754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.160791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.165520] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.165844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.165880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.170590] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.170913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.170948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.380 [2024-11-20 15:08:36.175663] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.380 [2024-11-20 15:08:36.175970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.380 [2024-11-20 15:08:36.176002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.640 [2024-11-20 15:08:36.180954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.640 [2024-11-20 15:08:36.181263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.640 [2024-11-20 15:08:36.181293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.640 [2024-11-20 15:08:36.186428] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.640 [2024-11-20 15:08:36.186767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.186797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.191708] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.192018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.192049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.197159] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.197482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.197512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.202308] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.202625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.202668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.207522] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.207850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.207885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.212581] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.213068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.213103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.218451] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.218777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.218807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.223511] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.223835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.223866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.228556] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.228998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.229034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.233773] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.234083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 
[2024-11-20 15:08:36.234113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.238868] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.239175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.239213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.243896] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.244208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.244239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.248990] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.249298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.249328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.254335] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.254685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.259806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.260118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.260149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.265013] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.265333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.265375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.270078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.270399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.270434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.275209] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.275668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.275704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.280423] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.280750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.280786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.285457] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.285782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.285818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.290454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.290776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.290811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.295583] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.296030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.641 [2024-11-20 15:08:36.296055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.641 [2024-11-20 15:08:36.300756] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.641 [2024-11-20 15:08:36.301072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.301102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.305811] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.306123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.306154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.310879] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.311187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.311231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.315973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.316279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.316310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.321003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.321308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.321339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.326049] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.326483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.326518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.331271] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.331583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.331614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.336341] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.336664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.336694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.341373] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.341813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.341848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.346586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.346914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.346950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.351698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.352006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.352037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.356787] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.357094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.357125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.361803] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.362110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.362140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.366849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.367156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.367186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.371939] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.372251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.372283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.377022] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 
[2024-11-20 15:08:36.377331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.377361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.382287] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.382596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.382626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.387607] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.387925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.387956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.393331] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.393809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.393845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.398615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.398951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.398993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.403785] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.404099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.404130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.408897] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.642 [2024-11-20 15:08:36.409205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.642 [2024-11-20 15:08:36.409236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.642 [2024-11-20 15:08:36.414007] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.414313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.414342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.643 [2024-11-20 15:08:36.419078] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.419396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.419426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.643 [2024-11-20 15:08:36.424164] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.424595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.424630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.643 [2024-11-20 15:08:36.429401] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.429724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.429755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.643 [2024-11-20 15:08:36.434469] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.434789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.434819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.643 [2024-11-20 15:08:36.439605] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.643 [2024-11-20 15:08:36.439940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.643 [2024-11-20 15:08:36.439975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.903 [2024-11-20 15:08:36.444935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.903 [2024-11-20 15:08:36.445247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.903 [2024-11-20 15:08:36.445277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.903 [2024-11-20 15:08:36.450145] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.903 [2024-11-20 15:08:36.450461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.903 [2024-11-20 15:08:36.450491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.903 [2024-11-20 15:08:36.455219] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.903 [2024-11-20 15:08:36.455525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.455555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.460291] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.460735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.460770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.465490] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.465813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.465843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.470542] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.470865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.470901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.475615] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.475939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.475972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.480697] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.481032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.481077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:18:05.904 [2024-11-20 15:08:36.485755] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.486065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.486099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.490849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.491160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.491202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.495965] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.496283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.496315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.501044] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.501360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.501393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.506130] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.506441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.506483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.511412] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.511869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.511904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.517065] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.517375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.517406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.522262] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.522570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.522610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.527357] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.527804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.527839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.532582] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.532944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.537766] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.538095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.538135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.543586] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.544060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.544094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.548962] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.549275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.549308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.554416] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.554753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.554793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.559608] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.560068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.560103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.564988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.565317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.565358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.904 [2024-11-20 15:08:36.570880] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.904 [2024-11-20 15:08:36.571205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.904 [2024-11-20 15:08:36.571265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.576215] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.576523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.576613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.581444] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.581769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.581801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.586536] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.586989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.587023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.591779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.592091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.592177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.596947] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.597256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.597296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.602000] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.602311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.602353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.607003] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.607322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.607408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.612123] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.612431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.612462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.617206] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.617662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.617695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.622388] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.622710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.622750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.627484] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.627805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 
[2024-11-20 15:08:36.627838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.632500] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.632822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.632853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.637599] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.638083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.638117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.643099] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.643422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.643463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.648272] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.648580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.648611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.653318] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.653774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.653807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.658525] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.658853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.658892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.663704] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.664014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.664053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.669002] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.669311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.669351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.674066] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.674372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.674402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.679147] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.905 [2024-11-20 15:08:36.679476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.905 [2024-11-20 15:08:36.679517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.905 [2024-11-20 15:08:36.684239] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.906 [2024-11-20 15:08:36.684684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.906 [2024-11-20 15:08:36.684714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:05.906 [2024-11-20 15:08:36.689399] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.906 [2024-11-20 15:08:36.689725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.906 [2024-11-20 15:08:36.689765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:05.906 [2024-11-20 15:08:36.694508] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.906 [2024-11-20 15:08:36.694836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.906 [2024-11-20 15:08:36.694876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.906 [2024-11-20 15:08:36.699553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:05.906 [2024-11-20 15:08:36.699876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:05.906 [2024-11-20 15:08:36.699910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:05.906 [2024-11-20 15:08:36.704850] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.165 [2024-11-20 15:08:36.705161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.165 [2024-11-20 15:08:36.705258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.165 [2024-11-20 15:08:36.710052] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.165 [2024-11-20 15:08:36.710510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.165 [2024-11-20 15:08:36.710700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.165 [2024-11-20 15:08:36.715745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.165 [2024-11-20 15:08:36.716201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.716391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.721326] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.721862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.722099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.727109] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.727573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.727795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.732628] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.733092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.733299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.738143] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.738607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.738811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.743844] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.744299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.744345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.749548] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.749880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.749917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.754849] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.755158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.755189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.759954] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.760261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.760292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.764997] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.765304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.765334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.770127] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.770438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.775415] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 
[2024-11-20 15:08:36.775890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.775926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.780622] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.780954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.780984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.785735] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.786074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.786116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.790988] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.791449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.791484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.796304] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.796613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.796653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.801429] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.801760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.801799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.806553] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.807000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.807034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.811814] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.812125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.812149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.816857] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.817169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.817198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.166 [2024-11-20 15:08:36.822097] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.166 [2024-11-20 15:08:36.822526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.166 [2024-11-20 15:08:36.822561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.827569] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.827910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.827940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.832687] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.833023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.837871] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.838181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.838211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.842970] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.843300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.843330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.848125] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.848437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.848467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.853176] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.853607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.853655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.858422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.858748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.858778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.863531] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.863857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.863889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.868566] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.868890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.868914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.873698] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.874018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.874058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.878831] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.879141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.879244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
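The repeated entries above and below are the data-digest leg of the digest error test: every *ERROR* from data_crc32_calc_done in tcp.c marks a CRC32C data-digest check failure, and each one is paired with a WRITE whose completion status is COMMAND TRANSIENT TRANSPORT ERROR (00/22); the test later reads the accumulated error count back through bdev_get_iostat (see the jq pipeline further down in this log). A small hypothetical helper, not part of the traced scripts, for tallying the injections from a saved copy of this console output (the filename console.log is an assumption):

  # count the digest failures logged by the TCP transport in this segment
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log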
00:18:06.167 [2024-11-20 15:08:36.883983] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.884289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.884319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.889090] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.889532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.889567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.894758] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.895072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.895105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.900858] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.901173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.901218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.906779] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.907091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.907120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.911935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.912249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.912286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.917911] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.918245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.918275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.923430] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.923753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.923783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.928526] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.928851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.928883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.933666] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.933979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.934009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.167 [2024-11-20 15:08:36.938680] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.167 [2024-11-20 15:08:36.938989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.167 [2024-11-20 15:08:36.939019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.168 [2024-11-20 15:08:36.943703] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.168 [2024-11-20 15:08:36.944011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.168 [2024-11-20 15:08:36.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.168 [2024-11-20 15:08:36.948711] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.168 [2024-11-20 15:08:36.949027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.168 [2024-11-20 15:08:36.949057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.168 [2024-11-20 15:08:36.953806] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.168 [2024-11-20 15:08:36.954122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.168 [2024-11-20 15:08:36.954152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.168 [2024-11-20 15:08:36.958845] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.168 [2024-11-20 15:08:36.959165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.168 [2024-11-20 15:08:36.959203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.168 [2024-11-20 15:08:36.963973] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.168 [2024-11-20 15:08:36.964278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.168 [2024-11-20 15:08:36.964308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.969261] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.969763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.969798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.974725] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.975038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.975068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.979828] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.980140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.980170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.984882] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.985192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.985222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.989935] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.990246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.990276] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:36.995016] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:36.995354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:36.995389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.000126] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.000575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:37.000612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.005367] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.005699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:37.005731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.010454] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.010780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:37.010822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.015603] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.015931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:37.015966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.020670] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.020991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 [2024-11-20 15:08:37.021033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.025789] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.429 [2024-11-20 15:08:37.026098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.429 
[2024-11-20 15:08:37.026192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.429 [2024-11-20 15:08:37.031216] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.031526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.031566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.036476] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.036935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.036970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.041726] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.042036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.042073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.046784] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.047096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.047137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.051906] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.052215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.052255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.056967] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.057273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.057359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.062053] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.062511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.062749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.067653] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.068102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.068273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.073267] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.073732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.073901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.078724] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.079189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.079397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.084342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.084882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.085067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.090129] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.090587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.090815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.097256] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.097804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.098003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.103683] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.104120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.104157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.109140] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.109591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.109836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.114826] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.115295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.115465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.120274] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.120716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.120751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.125559] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.125883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.125918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.130745] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.131059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.131089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.135930] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.136239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.136280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.141020] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.141332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.141371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.430 [2024-11-20 15:08:37.146086] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.430 [2024-11-20 15:08:37.146396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.430 [2024-11-20 15:08:37.146481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.151257] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.151728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.151902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.156900] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.157359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.157556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.162693] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.163164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.163353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.168342] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.168825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.169074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.174145] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.174623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.174817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.179984] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 
[2024-11-20 15:08:37.180478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.180690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.185684] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.186135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.186353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.191498] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.191971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.192142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.197346] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.197830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.198039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.203178] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.203676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.203878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.208901] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.209361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.209544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.214481] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.214952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.214994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.219736] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.220049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.220082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.224823] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.431 [2024-11-20 15:08:37.225132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.431 [2024-11-20 15:08:37.225173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.431 [2024-11-20 15:08:37.230101] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.690 [2024-11-20 15:08:37.230425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.690 [2024-11-20 15:08:37.230517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.690 [2024-11-20 15:08:37.235466] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.690 [2024-11-20 15:08:37.235915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.690 [2024-11-20 15:08:37.235953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.690 [2024-11-20 15:08:37.240839] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.690 [2024-11-20 15:08:37.241151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.690 [2024-11-20 15:08:37.241246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.690 [2024-11-20 15:08:37.246012] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.690 [2024-11-20 15:08:37.246476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.690 [2024-11-20 15:08:37.246672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.690 [2024-11-20 15:08:37.251621] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.252099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.252280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.257195] 
tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.257664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.257833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.262747] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.263209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.263391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.268422] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.268885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.269096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.274009] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.274458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.274706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.279702] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.280152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.280337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:06.691 [2024-11-20 15:08:37.285252] tcp.c:2036:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1682e30) with pdu=0x2000190fef90 00:18:06.691 [2024-11-20 15:08:37.285693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:06.691 [2024-11-20 15:08:37.285857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:06.691 00:18:06.691 Latency(us) 00:18:06.691 [2024-11-20T15:08:37.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.691 [2024-11-20T15:08:37.495Z] Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:18:06.691 nvme0n1 : 2.00 5872.56 734.07 0.00 0.00 2718.72 2159.71 7208.96 00:18:06.691 [2024-11-20T15:08:37.495Z] =================================================================================================================== 00:18:06.691 
[2024-11-20T15:08:37.495Z] Total : 5872.56 734.07 0.00 0.00 2718.72 2159.71 7208.96 00:18:06.691 0 00:18:06.691 15:08:37 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:18:06.691 15:08:37 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:18:06.691 15:08:37 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:18:06.691 | .driver_specific 00:18:06.691 | .nvme_error 00:18:06.691 | .status_code 00:18:06.691 | .command_transient_transport_error' 00:18:06.691 15:08:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:18:06.963 15:08:37 -- host/digest.sh@71 -- # (( 379 > 0 )) 00:18:06.963 15:08:37 -- host/digest.sh@73 -- # killprocess 83932 00:18:06.964 15:08:37 -- common/autotest_common.sh@936 -- # '[' -z 83932 ']' 00:18:06.964 15:08:37 -- common/autotest_common.sh@940 -- # kill -0 83932 00:18:06.964 15:08:37 -- common/autotest_common.sh@941 -- # uname 00:18:06.964 15:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:06.964 15:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83932 00:18:06.964 killing process with pid 83932 00:18:06.964 Received shutdown signal, test time was about 2.000000 seconds 00:18:06.964 00:18:06.964 Latency(us) 00:18:06.964 [2024-11-20T15:08:37.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.964 [2024-11-20T15:08:37.768Z] =================================================================================================================== 00:18:06.964 [2024-11-20T15:08:37.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.964 15:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:06.964 15:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:06.964 15:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83932' 00:18:06.964 15:08:37 -- common/autotest_common.sh@955 -- # kill 83932 00:18:06.964 15:08:37 -- common/autotest_common.sh@960 -- # wait 83932 00:18:07.250 15:08:37 -- host/digest.sh@115 -- # killprocess 83746 00:18:07.250 15:08:37 -- common/autotest_common.sh@936 -- # '[' -z 83746 ']' 00:18:07.250 15:08:37 -- common/autotest_common.sh@940 -- # kill -0 83746 00:18:07.250 15:08:37 -- common/autotest_common.sh@941 -- # uname 00:18:07.250 15:08:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:07.250 15:08:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83746 00:18:07.250 killing process with pid 83746 00:18:07.250 15:08:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:07.250 15:08:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:07.251 15:08:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83746' 00:18:07.251 15:08:37 -- common/autotest_common.sh@955 -- # kill 83746 00:18:07.251 15:08:37 -- common/autotest_common.sh@960 -- # wait 83746 00:18:07.251 ************************************ 00:18:07.251 END TEST nvmf_digest_error 00:18:07.251 ************************************ 00:18:07.251 00:18:07.251 real 0m16.232s 00:18:07.251 user 0m32.198s 00:18:07.251 sys 0m4.318s 00:18:07.251 15:08:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.251 15:08:37 -- common/autotest_common.sh@10 -- # set +x 00:18:07.251 15:08:37 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:18:07.251 15:08:37 -- host/digest.sh@139 -- # nvmftestfini 00:18:07.251 15:08:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.251 15:08:37 -- 
nvmf/common.sh@116 -- # sync 00:18:07.510 15:08:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.510 15:08:38 -- nvmf/common.sh@119 -- # set +e 00:18:07.510 15:08:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.510 15:08:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.510 rmmod nvme_tcp 00:18:07.510 rmmod nvme_fabrics 00:18:07.510 rmmod nvme_keyring 00:18:07.510 15:08:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.510 15:08:38 -- nvmf/common.sh@123 -- # set -e 00:18:07.510 15:08:38 -- nvmf/common.sh@124 -- # return 0 00:18:07.510 15:08:38 -- nvmf/common.sh@477 -- # '[' -n 83746 ']' 00:18:07.510 15:08:38 -- nvmf/common.sh@478 -- # killprocess 83746 00:18:07.510 15:08:38 -- common/autotest_common.sh@936 -- # '[' -z 83746 ']' 00:18:07.510 Process with pid 83746 is not found 00:18:07.510 15:08:38 -- common/autotest_common.sh@940 -- # kill -0 83746 00:18:07.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (83746) - No such process 00:18:07.510 15:08:38 -- common/autotest_common.sh@963 -- # echo 'Process with pid 83746 is not found' 00:18:07.510 15:08:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.510 15:08:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.510 15:08:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.510 15:08:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.510 15:08:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.510 15:08:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.510 15:08:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.510 15:08:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.510 15:08:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:07.510 00:18:07.510 real 0m32.941s 00:18:07.510 user 1m3.993s 00:18:07.510 sys 0m8.955s 00:18:07.510 ************************************ 00:18:07.510 END TEST nvmf_digest 00:18:07.510 ************************************ 00:18:07.510 15:08:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:07.510 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:18:07.510 15:08:38 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:18:07.510 15:08:38 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:18:07.510 15:08:38 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:07.510 15:08:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:07.510 15:08:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:07.510 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:18:07.510 ************************************ 00:18:07.510 START TEST nvmf_multipath 00:18:07.510 ************************************ 00:18:07.510 15:08:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:18:07.510 * Looking for test storage... 
00:18:07.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:07.510 15:08:38 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:07.510 15:08:38 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:07.510 15:08:38 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:07.769 15:08:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:07.769 15:08:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:07.769 15:08:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:07.769 15:08:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:07.769 15:08:38 -- scripts/common.sh@335 -- # IFS=.-: 00:18:07.769 15:08:38 -- scripts/common.sh@335 -- # read -ra ver1 00:18:07.769 15:08:38 -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.769 15:08:38 -- scripts/common.sh@336 -- # read -ra ver2 00:18:07.769 15:08:38 -- scripts/common.sh@337 -- # local 'op=<' 00:18:07.769 15:08:38 -- scripts/common.sh@339 -- # ver1_l=2 00:18:07.769 15:08:38 -- scripts/common.sh@340 -- # ver2_l=1 00:18:07.769 15:08:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:07.769 15:08:38 -- scripts/common.sh@343 -- # case "$op" in 00:18:07.769 15:08:38 -- scripts/common.sh@344 -- # : 1 00:18:07.769 15:08:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:07.769 15:08:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.769 15:08:38 -- scripts/common.sh@364 -- # decimal 1 00:18:07.769 15:08:38 -- scripts/common.sh@352 -- # local d=1 00:18:07.769 15:08:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.769 15:08:38 -- scripts/common.sh@354 -- # echo 1 00:18:07.769 15:08:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:07.769 15:08:38 -- scripts/common.sh@365 -- # decimal 2 00:18:07.769 15:08:38 -- scripts/common.sh@352 -- # local d=2 00:18:07.769 15:08:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.769 15:08:38 -- scripts/common.sh@354 -- # echo 2 00:18:07.769 15:08:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:07.769 15:08:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:07.769 15:08:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:07.769 15:08:38 -- scripts/common.sh@367 -- # return 0 00:18:07.769 15:08:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.769 15:08:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.769 --rc genhtml_branch_coverage=1 00:18:07.769 --rc genhtml_function_coverage=1 00:18:07.769 --rc genhtml_legend=1 00:18:07.769 --rc geninfo_all_blocks=1 00:18:07.769 --rc geninfo_unexecuted_blocks=1 00:18:07.769 00:18:07.769 ' 00:18:07.769 15:08:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.769 --rc genhtml_branch_coverage=1 00:18:07.769 --rc genhtml_function_coverage=1 00:18:07.769 --rc genhtml_legend=1 00:18:07.769 --rc geninfo_all_blocks=1 00:18:07.769 --rc geninfo_unexecuted_blocks=1 00:18:07.769 00:18:07.769 ' 00:18:07.769 15:08:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.769 --rc genhtml_branch_coverage=1 00:18:07.769 --rc genhtml_function_coverage=1 00:18:07.769 --rc genhtml_legend=1 00:18:07.769 --rc geninfo_all_blocks=1 00:18:07.769 --rc geninfo_unexecuted_blocks=1 00:18:07.769 00:18:07.769 ' 00:18:07.769 
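The trace above shows scripts/common.sh comparing the installed lcov version (1.15, extracted with awk '{print $NF}') against 2 before exporting the branch/function coverage options. A rough bash equivalent of that dotted-version check, sketched with GNU sort -V instead of the field-by-field walk the real cmp_versions performs, and setting only the two lcov --rc flags seen in the exports below:

  version_lt() {
      # true when $1 sorts strictly before $2 in version order
      [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi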
15:08:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:07.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.769 --rc genhtml_branch_coverage=1 00:18:07.769 --rc genhtml_function_coverage=1 00:18:07.769 --rc genhtml_legend=1 00:18:07.769 --rc geninfo_all_blocks=1 00:18:07.769 --rc geninfo_unexecuted_blocks=1 00:18:07.769 00:18:07.769 ' 00:18:07.769 15:08:38 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.769 15:08:38 -- nvmf/common.sh@7 -- # uname -s 00:18:07.769 15:08:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.769 15:08:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.769 15:08:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.769 15:08:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.769 15:08:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.769 15:08:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.769 15:08:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.769 15:08:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.769 15:08:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.769 15:08:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.769 15:08:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:18:07.769 15:08:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:18:07.769 15:08:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.769 15:08:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.769 15:08:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.769 15:08:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.769 15:08:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.769 15:08:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.769 15:08:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.769 15:08:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.769 15:08:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.769 15:08:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.769 15:08:38 -- paths/export.sh@5 -- # export PATH 00:18:07.769 15:08:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.769 15:08:38 -- nvmf/common.sh@46 -- # : 0 00:18:07.769 15:08:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:07.769 15:08:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:07.769 15:08:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:07.769 15:08:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.769 15:08:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.770 15:08:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:07.770 15:08:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:07.770 15:08:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:07.770 15:08:38 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.770 15:08:38 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.770 15:08:38 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.770 15:08:38 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:07.770 15:08:38 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.770 15:08:38 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:07.770 15:08:38 -- host/multipath.sh@30 -- # nvmftestinit 00:18:07.770 15:08:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:07.770 15:08:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.770 15:08:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:07.770 15:08:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:07.770 15:08:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:07.770 15:08:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.770 15:08:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.770 15:08:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.770 15:08:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:07.770 15:08:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:07.770 15:08:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:07.770 15:08:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:07.770 15:08:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:07.770 15:08:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:07.770 15:08:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.770 15:08:38 -- nvmf/common.sh@141 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.770 15:08:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:07.770 15:08:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:07.770 15:08:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.770 15:08:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.770 15:08:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.770 15:08:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.770 15:08:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.770 15:08:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.770 15:08:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.770 15:08:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.770 15:08:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:07.770 15:08:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:07.770 Cannot find device "nvmf_tgt_br" 00:18:07.770 15:08:38 -- nvmf/common.sh@154 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.770 Cannot find device "nvmf_tgt_br2" 00:18:07.770 15:08:38 -- nvmf/common.sh@155 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:07.770 15:08:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:07.770 Cannot find device "nvmf_tgt_br" 00:18:07.770 15:08:38 -- nvmf/common.sh@157 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:07.770 Cannot find device "nvmf_tgt_br2" 00:18:07.770 15:08:38 -- nvmf/common.sh@158 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:07.770 15:08:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:07.770 15:08:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.770 15:08:38 -- nvmf/common.sh@161 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.770 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.770 15:08:38 -- nvmf/common.sh@162 -- # true 00:18:07.770 15:08:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.770 15:08:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.770 15:08:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.770 15:08:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.770 15:08:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:07.770 15:08:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.028 15:08:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.028 15:08:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:08.028 15:08:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:08.028 15:08:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:08.028 15:08:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:08.028 15:08:38 -- nvmf/common.sh@184 -- # ip 
link set nvmf_tgt_br up 00:18:08.028 15:08:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:08.028 15:08:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.028 15:08:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:08.028 15:08:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.028 15:08:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:08.028 15:08:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:08.028 15:08:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.028 15:08:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:08.028 15:08:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:08.028 15:08:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:08.028 15:08:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.028 15:08:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:08.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:08.028 00:18:08.028 --- 10.0.0.2 ping statistics --- 00:18:08.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.028 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:08.028 15:08:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:08.028 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.028 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:08.028 00:18:08.028 --- 10.0.0.3 ping statistics --- 00:18:08.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.028 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:08.029 15:08:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:08.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:18:08.029 00:18:08.029 --- 10.0.0.1 ping statistics --- 00:18:08.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.029 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:18:08.029 15:08:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.029 15:08:38 -- nvmf/common.sh@421 -- # return 0 00:18:08.029 15:08:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:08.029 15:08:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.029 15:08:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:08.029 15:08:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:08.029 15:08:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.029 15:08:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:08.029 15:08:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:08.029 15:08:38 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:18:08.029 15:08:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:08.029 15:08:38 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.029 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.029 15:08:38 -- nvmf/common.sh@469 -- # nvmfpid=84215 00:18:08.029 15:08:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:08.029 15:08:38 -- nvmf/common.sh@470 -- # waitforlisten 84215 00:18:08.029 15:08:38 -- common/autotest_common.sh@829 -- # '[' -z 84215 ']' 00:18:08.029 15:08:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.029 15:08:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.029 15:08:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.029 15:08:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.029 15:08:38 -- common/autotest_common.sh@10 -- # set +x 00:18:08.029 [2024-11-20 15:08:38.802514] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:08.029 [2024-11-20 15:08:38.803055] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.288 [2024-11-20 15:08:38.944690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:08.288 [2024-11-20 15:08:38.985187] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:08.288 [2024-11-20 15:08:38.985562] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.288 [2024-11-20 15:08:38.985739] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.288 [2024-11-20 15:08:38.985975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
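(Editorial note, not part of the trace.) The nvmf_veth_init steps above build a small veth/bridge topology so the initiator side (10.0.0.1 on nvmf_init_if) can reach the target interfaces (10.0.0.2 and 10.0.0.3) that live inside the nvmf_tgt_ns_spdk namespace. A condensed, hedged sketch of that topology, using only the interface names and addresses visible in the trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is set up the same way):

  # sketch: rebuild the test network by hand, names/addresses as seen in the trace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> target; succeeds in the log above

This is a reading aid for the trace, not a reproduction of common.sh; the ping checks at the end of the trace confirm both target addresses and the namespaced loopback path are reachable before nvmf_tgt is started.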
00:18:08.288 [2024-11-20 15:08:38.986262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.288 [2024-11-20 15:08:38.986265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.226 15:08:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.226 15:08:39 -- common/autotest_common.sh@862 -- # return 0 00:18:09.226 15:08:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:09.226 15:08:39 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.226 15:08:39 -- common/autotest_common.sh@10 -- # set +x 00:18:09.226 15:08:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.226 15:08:39 -- host/multipath.sh@33 -- # nvmfapp_pid=84215 00:18:09.226 15:08:39 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:09.484 [2024-11-20 15:08:40.091118] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.485 15:08:40 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:09.743 Malloc0 00:18:09.743 15:08:40 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:10.001 15:08:40 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.259 15:08:40 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.518 [2024-11-20 15:08:41.092007] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.518 15:08:41 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:10.776 [2024-11-20 15:08:41.340154] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:10.776 15:08:41 -- host/multipath.sh@44 -- # bdevperf_pid=84271 00:18:10.776 15:08:41 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:10.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.776 15:08:41 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.776 15:08:41 -- host/multipath.sh@47 -- # waitforlisten 84271 /var/tmp/bdevperf.sock 00:18:10.776 15:08:41 -- common/autotest_common.sh@829 -- # '[' -z 84271 ']' 00:18:10.776 15:08:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.776 15:08:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.776 15:08:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
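(Editorial note, not part of the trace.) With nvmf_tgt listening on /var/tmp/spdk.sock, multipath.sh configures one subsystem with two TCP listeners (ports 4420 and 4421 on 10.0.0.2) and then launches bdevperf against its own RPC socket, /var/tmp/bdevperf.sock. A hedged sketch of that RPC sequence, copied from the commands in the trace (the $rpc shorthand is ours for brevity):

  # sketch: target-side multipath configuration as driven by host/multipath.sh
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # bdevperf (-m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90) then
  # attaches Nvme0 to this NQN twice, via 4420 and via 4421 with -x multipath, as the
  # trace shows next; the test flips ANA states per listener and uses bpftrace to count
  # which port carries I/O.

The per-path I/O counts that follow (the "@path[10.0.0.2, 4421]: ..." blocks) are the output of that bpftrace probe, parsed with cut/awk/sed to confirm the expected active port.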
00:18:10.776 15:08:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.776 15:08:41 -- common/autotest_common.sh@10 -- # set +x 00:18:11.710 15:08:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.710 15:08:42 -- common/autotest_common.sh@862 -- # return 0 00:18:11.710 15:08:42 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:11.969 15:08:42 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:12.534 Nvme0n1 00:18:12.534 15:08:43 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:12.793 Nvme0n1 00:18:12.793 15:08:43 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:12.793 15:08:43 -- host/multipath.sh@78 -- # sleep 1 00:18:13.771 15:08:44 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:13.771 15:08:44 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:14.030 15:08:44 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:14.289 15:08:45 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:14.289 15:08:45 -- host/multipath.sh@65 -- # dtrace_pid=84316 00:18:14.289 15:08:45 -- host/multipath.sh@66 -- # sleep 6 00:18:14.289 15:08:45 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:20.859 15:08:51 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:20.859 15:08:51 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:20.859 15:08:51 -- host/multipath.sh@67 -- # active_port=4421 00:18:20.859 15:08:51 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:20.859 Attaching 4 probes... 
00:18:20.859 @path[10.0.0.2, 4421]: 18155 00:18:20.859 @path[10.0.0.2, 4421]: 18030 00:18:20.859 @path[10.0.0.2, 4421]: 18797 00:18:20.859 @path[10.0.0.2, 4421]: 18744 00:18:20.859 @path[10.0.0.2, 4421]: 18711 00:18:20.859 15:08:51 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:20.859 15:08:51 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:20.859 15:08:51 -- host/multipath.sh@69 -- # sed -n 1p 00:18:20.859 15:08:51 -- host/multipath.sh@69 -- # port=4421 00:18:20.859 15:08:51 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:20.859 15:08:51 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:20.859 15:08:51 -- host/multipath.sh@72 -- # kill 84316 00:18:20.859 15:08:51 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:20.859 15:08:51 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:20.859 15:08:51 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:21.118 15:08:51 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:21.377 15:08:51 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:21.377 15:08:51 -- host/multipath.sh@65 -- # dtrace_pid=84435 00:18:21.377 15:08:51 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:21.377 15:08:51 -- host/multipath.sh@66 -- # sleep 6 00:18:27.938 15:08:57 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:27.938 15:08:57 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:27.938 15:08:58 -- host/multipath.sh@67 -- # active_port=4420 00:18:27.938 15:08:58 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:27.938 Attaching 4 probes... 
00:18:27.938 @path[10.0.0.2, 4420]: 18271 00:18:27.938 @path[10.0.0.2, 4420]: 18638 00:18:27.938 @path[10.0.0.2, 4420]: 18597 00:18:27.938 @path[10.0.0.2, 4420]: 18659 00:18:27.938 @path[10.0.0.2, 4420]: 18582 00:18:27.938 15:08:58 -- host/multipath.sh@69 -- # sed -n 1p 00:18:27.938 15:08:58 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:27.938 15:08:58 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:27.938 15:08:58 -- host/multipath.sh@69 -- # port=4420 00:18:27.938 15:08:58 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:27.938 15:08:58 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:27.938 15:08:58 -- host/multipath.sh@72 -- # kill 84435 00:18:27.938 15:08:58 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:27.938 15:08:58 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:27.938 15:08:58 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:27.938 15:08:58 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:27.938 15:08:58 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:27.938 15:08:58 -- host/multipath.sh@65 -- # dtrace_pid=84553 00:18:27.938 15:08:58 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:27.938 15:08:58 -- host/multipath.sh@66 -- # sleep 6 00:18:34.497 15:09:04 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:34.497 15:09:04 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:34.497 15:09:05 -- host/multipath.sh@67 -- # active_port=4421 00:18:34.497 15:09:05 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.497 Attaching 4 probes... 
00:18:34.497 @path[10.0.0.2, 4421]: 12572 00:18:34.497 @path[10.0.0.2, 4421]: 18049 00:18:34.497 @path[10.0.0.2, 4421]: 18051 00:18:34.497 @path[10.0.0.2, 4421]: 18318 00:18:34.497 @path[10.0.0.2, 4421]: 18105 00:18:34.497 15:09:05 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:34.497 15:09:05 -- host/multipath.sh@69 -- # sed -n 1p 00:18:34.497 15:09:05 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:34.497 15:09:05 -- host/multipath.sh@69 -- # port=4421 00:18:34.497 15:09:05 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:34.497 15:09:05 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:34.497 15:09:05 -- host/multipath.sh@72 -- # kill 84553 00:18:34.497 15:09:05 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:34.497 15:09:05 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:34.497 15:09:05 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:34.784 15:09:05 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:35.083 15:09:05 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:35.083 15:09:05 -- host/multipath.sh@65 -- # dtrace_pid=84671 00:18:35.083 15:09:05 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:35.083 15:09:05 -- host/multipath.sh@66 -- # sleep 6 00:18:41.642 15:09:11 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:41.642 15:09:11 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:41.642 15:09:11 -- host/multipath.sh@67 -- # active_port= 00:18:41.642 15:09:11 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.642 Attaching 4 probes... 
00:18:41.642 00:18:41.642 00:18:41.642 00:18:41.642 00:18:41.642 00:18:41.642 15:09:11 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:41.642 15:09:11 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:41.642 15:09:11 -- host/multipath.sh@69 -- # sed -n 1p 00:18:41.642 15:09:11 -- host/multipath.sh@69 -- # port= 00:18:41.642 15:09:11 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:41.642 15:09:11 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:41.642 15:09:11 -- host/multipath.sh@72 -- # kill 84671 00:18:41.642 15:09:11 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:41.642 15:09:11 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:41.642 15:09:11 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:41.642 15:09:12 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:41.900 15:09:12 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:41.900 15:09:12 -- host/multipath.sh@65 -- # dtrace_pid=84783 00:18:41.900 15:09:12 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:41.900 15:09:12 -- host/multipath.sh@66 -- # sleep 6 00:18:48.461 15:09:18 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:48.461 15:09:18 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:48.461 15:09:18 -- host/multipath.sh@67 -- # active_port=4421 00:18:48.461 15:09:18 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.461 Attaching 4 probes... 
00:18:48.461 @path[10.0.0.2, 4421]: 16081 00:18:48.461 @path[10.0.0.2, 4421]: 18051 00:18:48.461 @path[10.0.0.2, 4421]: 18119 00:18:48.461 @path[10.0.0.2, 4421]: 17501 00:18:48.461 @path[10.0.0.2, 4421]: 18128 00:18:48.461 15:09:18 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:48.461 15:09:18 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:48.461 15:09:18 -- host/multipath.sh@69 -- # sed -n 1p 00:18:48.461 15:09:18 -- host/multipath.sh@69 -- # port=4421 00:18:48.461 15:09:18 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.461 15:09:18 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:48.461 15:09:18 -- host/multipath.sh@72 -- # kill 84783 00:18:48.461 15:09:18 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:48.461 15:09:18 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:48.461 [2024-11-20 15:09:19.059927] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060188] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060366] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060537] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060711] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060728] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060737] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060746] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060754] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060763] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060771] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060779] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060787] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060796] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060804] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.461 [2024-11-20 15:09:19.060813] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060821] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060829] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060837] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060846] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060854] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060862] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060871] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060879] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060888] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060896] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060904] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060912] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060920] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 [2024-11-20 15:09:19.060929] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24177a0 is same with the state(5) to be set 00:18:48.462 15:09:19 -- host/multipath.sh@101 -- # sleep 1 00:18:49.397 15:09:20 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:49.397 15:09:20 -- host/multipath.sh@65 -- # dtrace_pid=84907 00:18:49.397 15:09:20 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:49.397 15:09:20 -- host/multipath.sh@66 -- # sleep 6 00:18:55.957 15:09:26 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:55.957 15:09:26 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:55.957 15:09:26 -- host/multipath.sh@67 -- # active_port=4420 00:18:55.957 15:09:26 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.957 Attaching 4 probes... 
00:18:55.957 @path[10.0.0.2, 4420]: 16706 00:18:55.957 @path[10.0.0.2, 4420]: 16892 00:18:55.957 @path[10.0.0.2, 4420]: 18051 00:18:55.957 @path[10.0.0.2, 4420]: 17282 00:18:55.957 @path[10.0.0.2, 4420]: 18112 00:18:55.957 15:09:26 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:55.957 15:09:26 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:55.957 15:09:26 -- host/multipath.sh@69 -- # sed -n 1p 00:18:55.957 15:09:26 -- host/multipath.sh@69 -- # port=4420 00:18:55.957 15:09:26 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:55.957 15:09:26 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:55.957 15:09:26 -- host/multipath.sh@72 -- # kill 84907 00:18:55.957 15:09:26 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:55.957 15:09:26 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:55.957 [2024-11-20 15:09:26.625431] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:55.957 15:09:26 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:56.216 15:09:26 -- host/multipath.sh@111 -- # sleep 6 00:19:02.771 15:09:32 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:19:02.771 15:09:32 -- host/multipath.sh@65 -- # dtrace_pid=85081 00:19:02.771 15:09:32 -- host/multipath.sh@66 -- # sleep 6 00:19:02.771 15:09:32 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 84215 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:19:09.379 15:09:38 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:19:09.379 15:09:38 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:19:09.379 15:09:39 -- host/multipath.sh@67 -- # active_port=4421 00:19:09.379 15:09:39 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.379 Attaching 4 probes... 
00:19:09.379 @path[10.0.0.2, 4421]: 17563 00:19:09.379 @path[10.0.0.2, 4421]: 17921 00:19:09.379 @path[10.0.0.2, 4421]: 18006 00:19:09.379 @path[10.0.0.2, 4421]: 18009 00:19:09.379 @path[10.0.0.2, 4421]: 18047 00:19:09.379 15:09:39 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:19:09.379 15:09:39 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:19:09.379 15:09:39 -- host/multipath.sh@69 -- # sed -n 1p 00:19:09.379 15:09:39 -- host/multipath.sh@69 -- # port=4421 00:19:09.379 15:09:39 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:19:09.379 15:09:39 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:19:09.379 15:09:39 -- host/multipath.sh@72 -- # kill 85081 00:19:09.379 15:09:39 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:09.379 15:09:39 -- host/multipath.sh@114 -- # killprocess 84271 00:19:09.379 15:09:39 -- common/autotest_common.sh@936 -- # '[' -z 84271 ']' 00:19:09.379 15:09:39 -- common/autotest_common.sh@940 -- # kill -0 84271 00:19:09.379 15:09:39 -- common/autotest_common.sh@941 -- # uname 00:19:09.379 15:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:09.379 15:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84271 00:19:09.379 killing process with pid 84271 00:19:09.379 15:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:09.379 15:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:09.379 15:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84271' 00:19:09.379 15:09:39 -- common/autotest_common.sh@955 -- # kill 84271 00:19:09.379 15:09:39 -- common/autotest_common.sh@960 -- # wait 84271 00:19:09.379 Connection closed with partial response: 00:19:09.379 00:19:09.379 00:19:09.379 15:09:39 -- host/multipath.sh@116 -- # wait 84271 00:19:09.379 15:09:39 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:09.379 [2024-11-20 15:08:41.408905] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:09.379 [2024-11-20 15:08:41.409012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84271 ] 00:19:09.379 [2024-11-20 15:08:41.543940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.379 [2024-11-20 15:08:41.591358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.379 Running I/O for 90 seconds... 
00:19:09.380 [2024-11-20 15:08:51.901716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:28448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.901790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.901848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.901871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.901911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.901940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.901965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.901980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:28464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:28520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.380 [2024-11-20 15:08:51.902532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:09.380 [2024-11-20 15:08:51.902808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.380 [2024-11-20 15:08:51.902823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.902846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.902862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.902885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.902900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.902923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.902938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.902996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.381 [2024-11-20 15:08:51.903021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:09.381 [2024-11-20 15:08:51.903060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.381 [2024-11-20 15:08:51.903394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.381 [2024-11-20 15:08:51.903621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.381 [2024-11-20 15:08:51.903676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.381 [2024-11-20 15:08:51.903837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:09.381 [2024-11-20 15:08:51.903859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:09.381 [2024-11-20 15:08:51.903874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:19:09.381 [2024-11-20 15:08:51.903896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:09.382 [2024-11-20 15:08:51.903912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:09.382 [2024-11-20 15:08:51.904235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:09.382 [2024-11-20 15:08:51.904257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001d p:0 m:0 dnr:0
[... several hundred further NOTICE lines in the same command/completion pattern: each outstanding READ and WRITE on sqid:1 nsid:1 (len:8, LBAs roughly 22496-29136) is printed by nvme_io_qpair_print_command and paired with an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion from spdk_nvme_print_completion (sqhd incrementing, p:0 m:0 dnr:0), first in a burst stamped 2024-11-20 15:08:51.903-.908 and again in a burst stamped 15:08:58.470-.477 ...]
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:09.389 [2024-11-20 15:08:58.477617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.477632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.477686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.477723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.477759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.477796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.477832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.477868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.477905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.477941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:09.390 [2024-11-20 15:08:58.477978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.477999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:22616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 
nsid:1 lba:23240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.390 [2024-11-20 15:08:58.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.390 [2024-11-20 15:08:58.478658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:09.390 [2024-11-20 15:08:58.478686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.478703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.478739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.478775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.478815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.478853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.478891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.478928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.478964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.478986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 
dnr:0 00:19:09.391 [2024-11-20 15:08:58.479139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.391 [2024-11-20 15:08:58.479599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:09.391 [2024-11-20 15:08:58.479672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.391 [2024-11-20 15:08:58.479688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.479726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.479763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.479799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.479835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.479872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.479907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.479944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.479966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.479988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.480249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:09.392 [2024-11-20 15:08:58.480290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.480326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.480348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.480362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.494573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.494628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.494720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.494780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.494814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.494862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.494895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.494942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.392 [2024-11-20 15:08:58.494974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:09.392 [2024-11-20 15:08:58.495022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.392 [2024-11-20 15:08:58.495054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.495133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.495248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.495931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.495963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496090] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.496201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.496281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.496360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.393 [2024-11-20 15:08:58.496470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.496914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.496950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:19:09.393 [2024-11-20 15:08:58.496998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.497030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.497077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.497109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.497157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.497188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.497236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.497268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.393 [2024-11-20 15:08:58.497315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.393 [2024-11-20 15:08:58.497347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.497425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.497505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.497583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.497686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.497770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:104 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.497849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.497944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.497995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.498591] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.498693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.394 [2024-11-20 15:08:58.498773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.498853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.498946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.498997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.499029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.499078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.499110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.501203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.501248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.501288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.501310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.501340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.394 [2024-11-20 15:08:58.501360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:09.394 [2024-11-20 15:08:58.501389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:09.394 - 00:19:09.402 [2024-11-20 15:08:58.501409 - 15:08:58.513955] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: a long run of near-identical command/completion pairs for the outstanding READ and WRITE I/O on sqid:1 nsid:1 (lba 22496 - 23840, len:8 each, cid 0 - 126); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 
[... remaining per-I/O command/completion NOTICE lines, all with the same (03/02) status, omitted ...] 
00:19:09.402 [2024-11-20 15:08:58.513978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.402 [2024-11-20 15:08:58.513997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:09.402 [2024-11-20 15:08:58.514020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.402 [2024-11-20 15:08:58.514035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:09.402 [2024-11-20 15:08:58.514057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.402 [2024-11-20 15:08:58.514072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:09.402 [2024-11-20 15:08:58.514094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.402 [2024-11-20 15:08:58.514109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:09.402 [2024-11-20 15:08:58.514131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.402 [2024-11-20 15:08:58.522717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.522792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.522827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.522862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.522884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.522916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.522999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:23600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.523178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.523278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.523358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.523440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.523964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.523995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.524015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:23720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.524066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.524116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.403 [2024-11-20 15:08:58.524167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.524218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:09.403 [2024-11-20 15:08:58.524269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.403 [2024-11-20 15:08:58.524336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:09.403 [2024-11-20 15:08:58.524367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:23768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.524841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.524892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.524942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.524972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.524993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.525431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.525461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:08:58.525481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:08:58.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:08:58.526145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.544923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:09:05.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.545052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:09:05.545074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.545098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:09:05.545113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.545136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:09:05.545152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.545206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.404 [2024-11-20 15:09:05.545223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:09.404 [2024-11-20 15:09:05.545245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.404 [2024-11-20 15:09:05.545260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:19:09.404 [2024-11-20 15:09:05.545282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.545617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.545781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.545868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.545936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.545972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.545998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.546071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.546195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.405 [2024-11-20 15:09:05.546462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.405 [2024-11-20 15:09:05.546594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:09.405 [2024-11-20 15:09:05.546634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.546703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.546746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.546778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.546815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.546844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.546890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.546917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.546950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.546977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:09.406 [2024-11-20 15:09:05.547178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.547415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.547856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 
nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.547916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.547952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.547982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.548059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.548133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.548197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.548264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.548323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.548386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.548461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.406 [2024-11-20 15:09:05.548513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:09.406 [2024-11-20 15:09:05.548548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.406 [2024-11-20 15:09:05.548577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.548615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.548674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.548923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006c p:0 m:0 dnr:0 
00:19:09.407 [2024-11-20 15:09:05.548972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.548988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.407 [2024-11-20 15:09:05.549405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.407 [2024-11-20 15:09:05.549650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:09.407 [2024-11-20 15:09:05.549678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.549694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.549717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.549732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.549754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.549769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.549898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.549923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:09.408 [2024-11-20 15:09:05.550595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.550809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.550970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.550992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.551007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.551029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.551044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.551067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.551082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.552132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.552184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.552244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.552279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.552329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.408 [2024-11-20 15:09:05.552363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.552414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.408 [2024-11-20 15:09:05.552443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:09.408 [2024-11-20 15:09:05.552490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.552521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.552570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.552600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.552673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.552709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.552783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.552817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.552868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.552900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.552952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.552984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.553196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.553287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:19:09.409 [2024-11-20 15:09:05.553364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.553380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.553451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:05.553532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:05.553570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.409 [2024-11-20 15:09:05.553586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:40288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:09.409 [2024-11-20 15:09:19.061241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.409 [2024-11-20 15:09:19.061313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.409 [2024-11-20 15:09:19.061328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:41016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061874] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.061918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.061977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.061992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.062006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.062035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.410 [2024-11-20 15:09:19.062065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.062124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.410 [2024-11-20 15:09:19.062153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.410 [2024-11-20 15:09:19.062175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:118 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:41176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40544 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:41224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 
[2024-11-20 15:09:19.062813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.411 [2024-11-20 15:09:19.062966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.062982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.411 [2024-11-20 15:09:19.062996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.411 [2024-11-20 15:09:19.063011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063115] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:41312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.412 [2024-11-20 15:09:19.063616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:41336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.412 [2024-11-20 15:09:19.063704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.412 [2024-11-20 15:09:19.063718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:41352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.063765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.063920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.063946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.063964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.063979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.063994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:09.413 [2024-11-20 15:09:19.064234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064540] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.413 [2024-11-20 15:09:19.064695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.413 [2024-11-20 15:09:19.064730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.413 [2024-11-20 15:09:19.064746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.414 [2024-11-20 15:09:19.064760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.414 [2024-11-20 15:09:19.064789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.064818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.414 [2024-11-20 15:09:19.064848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064864] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.064878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.064908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:09.414 [2024-11-20 15:09:19.064937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.064967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.064982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.064996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.065026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.065055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.065092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.065122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:09.414 [2024-11-20 15:09:19.065151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:09.414 [2024-11-20 15:09:19.065166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17e0100 
is same with the state(5) to be set
00:19:09.414 [2024-11-20 15:09:19.065184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:09.414 [2024-11-20 15:09:19.065197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:09.414 [2024-11-20 15:09:19.065210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41000 len:8 PRP1 0x0 PRP2 0x0
00:19:09.414 [2024-11-20 15:09:19.065224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:09.414 [2024-11-20 15:09:19.065271] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17e0100 was disconnected and freed. reset controller.
00:19:09.414 [2024-11-20 15:09:19.065395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:09.414 [2024-11-20 15:09:19.065425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:09.414 [2024-11-20 15:09:19.065442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:09.414 [2024-11-20 15:09:19.065455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:09.414 [2024-11-20 15:09:19.065470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:09.414 [2024-11-20 15:09:19.065483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:09.414 [2024-11-20 15:09:19.065497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:09.414 [2024-11-20 15:09:19.065510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:09.414 [2024-11-20 15:09:19.065523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ef3c0 is same with the state(5) to be set
00:19:09.414 [2024-11-20 15:09:19.066611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:09.414 [2024-11-20 15:09:19.066674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ef3c0 (9): Bad file descriptor
00:19:09.414 [2024-11-20 15:09:19.067019] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:09.414 [2024-11-20 15:09:19.067108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:09.414 [2024-11-20 15:09:19.067160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:09.414 [2024-11-20 15:09:19.067183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17ef3c0 with addr=10.0.0.2, port=4421
00:19:09.414 [2024-11-20 15:09:19.067210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ef3c0 is same with the state(5) to be set
00:19:09.414 [2024-11-20 15:09:19.067265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ef3c0 (9): Bad file descriptor
00:19:09.414 [2024-11-20 15:09:19.067298] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:09.414 [2024-11-20 15:09:19.067315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:09.414 [2024-11-20 15:09:19.067330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:09.414 [2024-11-20 15:09:19.067368] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:09.414 [2024-11-20 15:09:19.067386] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:09.414 [2024-11-20 15:09:29.113539] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:09.414 Received shutdown signal, test time was about 55.720343 seconds
00:19:09.415
00:19:09.415 Latency(us)
00:19:09.415 [2024-11-20T15:09:40.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:09.415 [2024-11-20T15:09:40.219Z] Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:09.415 Verification LBA range: start 0x0 length 0x4000
00:19:09.415 Nvme0n1 : 55.72 10286.65 40.18 0.00 0.00 12425.12 297.89 7046430.72
00:19:09.415 [2024-11-20T15:09:40.219Z] ===================================================================================================================
00:19:09.415 [2024-11-20T15:09:40.219Z] Total : 10286.65 40.18 0.00 0.00 12425.12 297.89 7046430.72
00:19:09.415 15:09:39 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:09.415 15:09:39 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:19:09.415 15:09:39 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:09.415 15:09:39 -- host/multipath.sh@125 -- # nvmftestfini
00:19:09.415 15:09:39 -- nvmf/common.sh@476 -- # nvmfcleanup
00:19:09.415 15:09:39 -- nvmf/common.sh@116 -- # sync
00:19:09.415 15:09:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:19:09.415 15:09:39 -- nvmf/common.sh@119 -- # set +e
00:19:09.415 15:09:39 -- nvmf/common.sh@120 -- # for i in {1..20}
00:19:09.415 15:09:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:19:09.415 rmmod nvme_tcp
00:19:09.415 rmmod nvme_fabrics
00:19:09.415 rmmod nvme_keyring
00:19:09.415 15:09:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:19:09.415 15:09:39 -- nvmf/common.sh@123 -- # set -e
00:19:09.415 15:09:39 -- nvmf/common.sh@124 -- # return 0
00:19:09.415 15:09:39 -- nvmf/common.sh@477 -- # '[' -n 84215 ']'
00:19:09.415 15:09:39 -- nvmf/common.sh@478 -- # killprocess 84215
00:19:09.415 15:09:39 -- common/autotest_common.sh@936 -- # '[' -z 84215 ']'
00:19:09.415 15:09:39 -- common/autotest_common.sh@940 -- # kill -0 84215
00:19:09.415 15:09:39 -- common/autotest_common.sh@941 -- # uname
00:19:09.415 15:09:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:09.415 15:09:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84215
00:19:09.415 15:09:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:09.415 15:09:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:09.415 killing process with pid 84215
00:19:09.415 15:09:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84215'
00:19:09.415 15:09:39 -- common/autotest_common.sh@955 -- # kill 84215
00:19:09.415 15:09:39 -- common/autotest_common.sh@960 -- # wait 84215
00:19:09.415 15:09:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:19:09.415 15:09:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:19:09.415 15:09:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:19:09.415 15:09:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:09.415 15:09:40 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:19:09.415 15:09:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:09.415 15:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:09.415 15:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:09.415 15:09:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:19:09.415
00:19:09.415 real 1m1.881s
00:19:09.415 user 2m51.688s
00:19:09.415 sys 0m18.619s
00:19:09.415 15:09:40 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:09.415 15:09:40 -- common/autotest_common.sh@10 -- # set +x
00:19:09.415 ************************************
00:19:09.415 END TEST nvmf_multipath
00:19:09.415 ************************************
00:19:09.415 15:09:40 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:09.415 15:09:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:19:09.415 15:09:40 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:09.415 15:09:40 -- common/autotest_common.sh@10 -- # set +x
00:19:09.415 ************************************
00:19:09.415 START TEST nvmf_timeout
00:19:09.415 ************************************
00:19:09.415 15:09:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:19:09.674 * Looking for test storage...
00:19:09.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:19:09.674 15:09:40 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:09.674 15:09:40 -- common/autotest_common.sh@1690 -- # lcov --version
00:19:09.674 15:09:40 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:09.674 15:09:40 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:09.674 15:09:40 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:09.674 15:09:40 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:09.674 15:09:40 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:09.674 15:09:40 -- scripts/common.sh@335 -- # IFS=.-:
00:19:09.674 15:09:40 -- scripts/common.sh@335 -- # read -ra ver1
00:19:09.674 15:09:40 -- scripts/common.sh@336 -- # IFS=.-:
00:19:09.674 15:09:40 -- scripts/common.sh@336 -- # read -ra ver2
00:19:09.674 15:09:40 -- scripts/common.sh@337 -- # local 'op=<'
00:19:09.674 15:09:40 -- scripts/common.sh@339 -- # ver1_l=2
00:19:09.674 15:09:40 -- scripts/common.sh@340 -- # ver2_l=1
00:19:09.674 15:09:40 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:09.674 15:09:40 -- scripts/common.sh@343 -- # case "$op" in
00:19:09.674 15:09:40 -- scripts/common.sh@344 -- # : 1
00:19:09.674 15:09:40 -- scripts/common.sh@363 -- # (( v = 0 ))
00:19:09.674 15:09:40 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:19:09.675 15:09:40 -- scripts/common.sh@364 -- # decimal 1 00:19:09.675 15:09:40 -- scripts/common.sh@352 -- # local d=1 00:19:09.675 15:09:40 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.675 15:09:40 -- scripts/common.sh@354 -- # echo 1 00:19:09.675 15:09:40 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:09.675 15:09:40 -- scripts/common.sh@365 -- # decimal 2 00:19:09.675 15:09:40 -- scripts/common.sh@352 -- # local d=2 00:19:09.675 15:09:40 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.675 15:09:40 -- scripts/common.sh@354 -- # echo 2 00:19:09.675 15:09:40 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:09.675 15:09:40 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:09.675 15:09:40 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:09.675 15:09:40 -- scripts/common.sh@367 -- # return 0 00:19:09.675 15:09:40 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.675 15:09:40 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:09.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.675 --rc genhtml_branch_coverage=1 00:19:09.675 --rc genhtml_function_coverage=1 00:19:09.675 --rc genhtml_legend=1 00:19:09.675 --rc geninfo_all_blocks=1 00:19:09.675 --rc geninfo_unexecuted_blocks=1 00:19:09.675 00:19:09.675 ' 00:19:09.675 15:09:40 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:09.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.675 --rc genhtml_branch_coverage=1 00:19:09.675 --rc genhtml_function_coverage=1 00:19:09.675 --rc genhtml_legend=1 00:19:09.675 --rc geninfo_all_blocks=1 00:19:09.675 --rc geninfo_unexecuted_blocks=1 00:19:09.675 00:19:09.675 ' 00:19:09.675 15:09:40 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:09.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.675 --rc genhtml_branch_coverage=1 00:19:09.675 --rc genhtml_function_coverage=1 00:19:09.675 --rc genhtml_legend=1 00:19:09.675 --rc geninfo_all_blocks=1 00:19:09.675 --rc geninfo_unexecuted_blocks=1 00:19:09.675 00:19:09.675 ' 00:19:09.675 15:09:40 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:09.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.675 --rc genhtml_branch_coverage=1 00:19:09.675 --rc genhtml_function_coverage=1 00:19:09.675 --rc genhtml_legend=1 00:19:09.675 --rc geninfo_all_blocks=1 00:19:09.675 --rc geninfo_unexecuted_blocks=1 00:19:09.675 00:19:09.675 ' 00:19:09.675 15:09:40 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:09.675 15:09:40 -- nvmf/common.sh@7 -- # uname -s 00:19:09.675 15:09:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.675 15:09:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.675 15:09:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.675 15:09:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.675 15:09:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.675 15:09:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.675 15:09:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.675 15:09:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.675 15:09:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.675 15:09:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:19:09.675 
15:09:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:19:09.675 15:09:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.675 15:09:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.675 15:09:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:09.675 15:09:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:09.675 15:09:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.675 15:09:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.675 15:09:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.675 15:09:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.675 15:09:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.675 15:09:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.675 15:09:40 -- paths/export.sh@5 -- # export PATH 00:19:09.675 15:09:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.675 15:09:40 -- nvmf/common.sh@46 -- # : 0 00:19:09.675 15:09:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:09.675 15:09:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:09.675 15:09:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:09.675 15:09:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.675 15:09:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.675 15:09:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 
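The trace above shows nvmf/common.sh being sourced by timeout.sh: the listener ports 4420-4422 are fixed, a host NQN is generated with 'nvme gen-hostnqn', and the matching host ID plus the NVMF_APP argument list are derived from it before the test body runs. A minimal sketch of that identity setup, kept to the variables visible in the trace (the parameter expansion used to peel the UUID out of the NQN is an illustrative assumption, not the script's own code):

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVME_HOSTNQN=$(nvme gen-hostnqn)           # prints e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}            # assumed extraction: keep the text after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

The NVME_CONNECT/NVME_HOST definitions in the trace suggest this pair is what an 'nvme connect'-style consumer elsewhere in the suite would receive as --hostnqn/--hostid; here it only parameterizes the TCP test environment.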
00:19:09.675 15:09:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:09.675 15:09:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:09.675 15:09:40 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:09.675 15:09:40 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:09.675 15:09:40 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:09.675 15:09:40 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:09.675 15:09:40 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.675 15:09:40 -- host/timeout.sh@19 -- # nvmftestinit 00:19:09.675 15:09:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:09.675 15:09:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.675 15:09:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:09.675 15:09:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:09.675 15:09:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:09.675 15:09:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.675 15:09:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.675 15:09:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.675 15:09:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:09.675 15:09:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:09.675 15:09:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:09.675 15:09:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:09.675 15:09:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:09.675 15:09:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:09.675 15:09:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:09.675 15:09:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:09.675 15:09:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:09.675 15:09:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:09.675 15:09:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:09.675 15:09:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:09.675 15:09:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:09.676 15:09:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:09.676 15:09:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:09.676 15:09:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:09.676 Cannot find device "nvmf_tgt_br" 00:19:09.676 15:09:40 -- nvmf/common.sh@154 -- # true 00:19:09.676 15:09:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:09.676 Cannot find device "nvmf_tgt_br2" 00:19:09.676 15:09:40 -- nvmf/common.sh@155 -- # true 00:19:09.676 15:09:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:09.676 15:09:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:09.676 Cannot find device "nvmf_tgt_br" 00:19:09.676 15:09:40 -- nvmf/common.sh@157 -- # true 00:19:09.676 15:09:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:09.676 Cannot find device "nvmf_tgt_br2" 00:19:09.676 15:09:40 -- nvmf/common.sh@158 -- # true 00:19:09.676 15:09:40 -- 
nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:09.676 15:09:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:09.936 15:09:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:09.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.936 15:09:40 -- nvmf/common.sh@161 -- # true 00:19:09.936 15:09:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:09.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:09.936 15:09:40 -- nvmf/common.sh@162 -- # true 00:19:09.936 15:09:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:09.936 15:09:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:09.936 15:09:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:09.936 15:09:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:09.936 15:09:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:09.936 15:09:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:09.936 15:09:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:09.936 15:09:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:09.936 15:09:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:09.936 15:09:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:09.936 15:09:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:09.936 15:09:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:09.936 15:09:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:09.936 15:09:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:09.936 15:09:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:09.936 15:09:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:09.936 15:09:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:09.936 15:09:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:09.937 15:09:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:09.937 15:09:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.937 15:09:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.937 15:09:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.937 15:09:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.937 15:09:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:09.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:19:09.937 00:19:09.937 --- 10.0.0.2 ping statistics --- 00:19:09.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.937 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:09.937 15:09:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:09.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:19:09.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:19:09.937 00:19:09.937 --- 10.0.0.3 ping statistics --- 00:19:09.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.937 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:09.937 15:09:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:09.937 00:19:09.937 --- 10.0.0.1 ping statistics --- 00:19:09.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.937 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:09.937 15:09:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.937 15:09:40 -- nvmf/common.sh@421 -- # return 0 00:19:09.937 15:09:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:09.937 15:09:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.937 15:09:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:09.937 15:09:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:09.937 15:09:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.937 15:09:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:09.937 15:09:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:09.937 15:09:40 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:19:09.937 15:09:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:09.937 15:09:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:09.937 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:19:09.937 15:09:40 -- nvmf/common.sh@469 -- # nvmfpid=85401 00:19:09.937 15:09:40 -- nvmf/common.sh@470 -- # waitforlisten 85401 00:19:09.937 15:09:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:09.937 15:09:40 -- common/autotest_common.sh@829 -- # '[' -z 85401 ']' 00:19:09.937 15:09:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.937 15:09:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.937 15:09:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.937 15:09:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.937 15:09:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.196 [2024-11-20 15:09:40.770395] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:10.196 [2024-11-20 15:09:40.770489] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.196 [2024-11-20 15:09:40.906340] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:10.196 [2024-11-20 15:09:40.940605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:10.196 [2024-11-20 15:09:40.940766] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.196 [2024-11-20 15:09:40.940779] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
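At this point nvmf_veth_init has finished building the test topology traced above: the nvmf_tgt_ns_spdk namespace holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator side keeps nvmf_init_if (10.0.0.1), the peer ends hang off the nvmf_br bridge, and iptables opens TCP port 4420; the three pings then confirm both directions before nvmf_tgt is started inside the namespace with -m 0x3. A condensed, hedged replay of that setup, reduced to the ip/iptables/ping commands that appear in the trace:

    # namespace plus three veth pairs (initiator, target if, target if2)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge ties the host-side peers together; open the NVMe/TCP port
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # reachability checks matching the ping statistics in the log
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1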
00:19:10.196 [2024-11-20 15:09:40.940788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.196 [2024-11-20 15:09:40.944005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.196 [2024-11-20 15:09:40.944096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.132 15:09:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.132 15:09:41 -- common/autotest_common.sh@862 -- # return 0 00:19:11.132 15:09:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:11.132 15:09:41 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:11.132 15:09:41 -- common/autotest_common.sh@10 -- # set +x 00:19:11.132 15:09:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.132 15:09:41 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.132 15:09:41 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:11.390 [2024-11-20 15:09:41.987543] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.390 15:09:42 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:11.715 Malloc0 00:19:11.715 15:09:42 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:11.973 15:09:42 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:12.231 15:09:42 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:12.489 [2024-11-20 15:09:43.098983] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.489 15:09:43 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:19:12.489 15:09:43 -- host/timeout.sh@32 -- # bdevperf_pid=85450 00:19:12.489 15:09:43 -- host/timeout.sh@34 -- # waitforlisten 85450 /var/tmp/bdevperf.sock 00:19:12.489 15:09:43 -- common/autotest_common.sh@829 -- # '[' -z 85450 ']' 00:19:12.489 15:09:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.489 15:09:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.489 15:09:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.489 15:09:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.489 15:09:43 -- common/autotest_common.sh@10 -- # set +x 00:19:12.489 [2024-11-20 15:09:43.155540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
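With the target up (reactors on cores 0 and 1, nvmfpid 85401), the trace shows timeout.sh provisioning it over the default RPC socket and then launching bdevperf as the host-side initiator on core 2, started idle (-z) so the later perform_tests call can drive it over /var/tmp/bdevperf.sock. A compact sketch of that sequence, using only the rpc.py and bdevperf invocations visible in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
    # and a subsystem exporting it on 10.0.0.2:4420
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host side: bdevperf waits (-z) on its own RPC socket for the attach and
    # perform_tests calls that follow in the log
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -f &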
00:19:12.489 [2024-11-20 15:09:43.155620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85450 ] 00:19:12.748 [2024-11-20 15:09:43.295252] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.748 [2024-11-20 15:09:43.330121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.748 15:09:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:12.748 15:09:43 -- common/autotest_common.sh@862 -- # return 0 00:19:12.748 15:09:43 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:13.006 15:09:43 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:13.263 NVMe0n1 00:19:13.263 15:09:44 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:13.263 15:09:44 -- host/timeout.sh@51 -- # rpc_pid=85466 00:19:13.263 15:09:44 -- host/timeout.sh@53 -- # sleep 1 00:19:13.521 Running I/O for 10 seconds... 00:19:14.455 15:09:45 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.739 [2024-11-20 15:09:45.302338] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302398] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302421] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302432] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302441] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302449] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302458] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302467] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302476] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302484] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302493] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302501] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 
15:09:45.302510] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302518] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5caa60 is same with the state(5) to be set 00:19:14.739 [2024-11-20 15:09:45.302585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:121608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:121624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.739 [2024-11-20 15:09:45.302960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.739 [2024-11-20 15:09:45.302981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.739 [2024-11-20 15:09:45.302993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.739 [2024-11-20 15:09:45.303004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303059] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.303193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 
lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.303962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.304334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.304761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.304783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.740 [2024-11-20 15:09:45.304804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.740 [2024-11-20 15:09:45.304878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:121152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.740 [2024-11-20 15:09:45.304888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:14.741 [2024-11-20 15:09:45.305153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:121824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:121864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 
15:09:45.305784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:121896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.305913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.305988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.305998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.741 [2024-11-20 15:09:45.306148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.306170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.306191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.306213] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.741 [2024-11-20 15:09:45.306234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.741 [2024-11-20 15:09:45.306246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:122016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:122024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:122040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:14.742 [2024-11-20 15:09:45.306674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:122080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:122088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:122096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:122136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 
15:09:45.306898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.742 [2024-11-20 15:09:45.306929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.742 [2024-11-20 15:09:45.306964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.742 [2024-11-20 15:09:45.306974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.306985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.306995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.743 [2024-11-20 15:09:45.307017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:122208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:14.743 [2024-11-20 15:09:45.307081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307113] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:14.743 [2024-11-20 15:09:45.307269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307280] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135b9a0 is same with the state(5) to be set 00:19:14.743 [2024-11-20 15:09:45.307294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:14.743 [2024-11-20 15:09:45.307305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:14.743 [2024-11-20 15:09:45.307314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121632 len:8 PRP1 0x0 PRP2 0x0 00:19:14.743 [2024-11-20 15:09:45.307323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:14.743 [2024-11-20 15:09:45.307369] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x135b9a0 was disconnected and freed. reset controller. 
00:19:14.743 [2024-11-20 15:09:45.307656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:14.743 [2024-11-20 15:09:45.307742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360610 (9): Bad file descriptor
00:19:14.743 [2024-11-20 15:09:45.307846] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:14.743 [2024-11-20 15:09:45.307910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:14.743 [2024-11-20 15:09:45.307953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:14.743 [2024-11-20 15:09:45.307975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1360610 with addr=10.0.0.2, port=4420
00:19:14.743 [2024-11-20 15:09:45.307986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360610 is same with the state(5) to be set
00:19:14.743 [2024-11-20 15:09:45.308005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360610 (9): Bad file descriptor
00:19:14.743 [2024-11-20 15:09:45.308023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:14.743 [2024-11-20 15:09:45.308032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:14.743 [2024-11-20 15:09:45.308043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:14.743 [2024-11-20 15:09:45.308063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:14.743 [2024-11-20 15:09:45.308074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:14.743 15:09:45 -- host/timeout.sh@56 -- # sleep 2
00:19:16.647 [2024-11-20 15:09:47.308228] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:16.647 [2024-11-20 15:09:47.308322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:16.647 [2024-11-20 15:09:47.308366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:16.647 [2024-11-20 15:09:47.308384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1360610 with addr=10.0.0.2, port=4420
00:19:16.647 [2024-11-20 15:09:47.308397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360610 is same with the state(5) to be set
00:19:16.647 [2024-11-20 15:09:47.308424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360610 (9): Bad file descriptor
00:19:16.647 [2024-11-20 15:09:47.308444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:16.647 [2024-11-20 15:09:47.308454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:16.647 [2024-11-20 15:09:47.308465] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:16.647 [2024-11-20 15:09:47.308492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:16.647 [2024-11-20 15:09:47.308504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:16.647 15:09:47 -- host/timeout.sh@57 -- # get_controller
00:19:16.647 15:09:47 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:16.647 15:09:47 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:16.906 15:09:47 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:19:16.906 15:09:47 -- host/timeout.sh@58 -- # get_bdev
00:19:16.906 15:09:47 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:16.906 15:09:47 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:17.166 15:09:47 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:19:17.166 15:09:47 -- host/timeout.sh@61 -- # sleep 5
00:19:18.542 [2024-11-20 15:09:49.308684] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.542 [2024-11-20 15:09:49.308790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.542 [2024-11-20 15:09:49.308836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:18.542 [2024-11-20 15:09:49.308854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1360610 with addr=10.0.0.2, port=4420
00:19:18.542 [2024-11-20 15:09:49.308868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360610 is same with the state(5) to be set
00:19:18.542 [2024-11-20 15:09:49.308896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1360610 (9): Bad file descriptor
00:19:18.542 [2024-11-20 15:09:49.308916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:18.542 [2024-11-20 15:09:49.308926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:18.542 [2024-11-20 15:09:49.308938] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:18.542 [2024-11-20 15:09:49.308966] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:18.542 [2024-11-20 15:09:49.308979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.070 [2024-11-20 15:09:51.309016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.070 [2024-11-20 15:09:51.309091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:21.070 [2024-11-20 15:09:51.309105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:21.070 [2024-11-20 15:09:51.309116] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:19:21.070 [2024-11-20 15:09:51.309146] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
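While the target stays unreachable, bdev_nvme keeps the NVMe0 controller object alive and retries the TCP connection on the schedule set at attach time; the get_controller and get_bdev xtrace steps above confirm that the controller and its NVMe0n1 bdev are still registered with the bdevperf app during the outage. A minimal sketch of those two helpers, reconstructed from the xtrace rather than taken verbatim from host/timeout.sh (the rpc_py and bdevperf_rpc_sock variable names are assumptions):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed variable name
bdevperf_rpc_sock=/var/tmp/bdevperf.sock             # assumed variable name

get_controller() {
	# List the NVMe controllers known to bdevperf and print their names.
	"$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
	# List the registered bdevs and print their names.
	"$rpc_py" -s "$bdevperf_rpc_sock" bdev_get_bdevs | jq -r '.[].name'
}

# The test asserts that both objects survive while the connection is down:
[[ $(get_controller) == "NVMe0" ]]
[[ $(get_bdev) == "NVMe0n1" ]]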
00:19:21.637
00:19:21.637                                                                 Latency(us)
00:19:21.637 [2024-11-20T15:09:52.441Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:21.637 [2024-11-20T15:09:52.441Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:21.637                          Verification LBA range: start 0x0 length 0x4000
00:19:21.637                          NVMe0n1             :       8.18    1853.12       7.24      15.66      0.00   68383.40    3232.12 7015926.69
00:19:21.637 [2024-11-20T15:09:52.441Z] ===================================================================================================================
00:19:21.637 [2024-11-20T15:09:52.441Z] Total                       :            1853.12       7.24      15.66      0.00   68383.40    3232.12 7015926.69
00:19:21.637 0
00:19:22.204 15:09:52 -- host/timeout.sh@62 -- # get_controller
00:19:22.204 15:09:52 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:22.204 15:09:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:22.464 15:09:53 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:22.464 15:09:53 -- host/timeout.sh@63 -- # get_bdev
00:19:22.464 15:09:53 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:22.464 15:09:53 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:22.723 15:09:53 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:22.723 15:09:53 -- host/timeout.sh@65 -- # wait 85466
00:19:22.723 15:09:53 -- host/timeout.sh@67 -- # killprocess 85450
00:19:22.723 15:09:53 -- common/autotest_common.sh@936 -- # '[' -z 85450 ']'
00:19:22.723 15:09:53 -- common/autotest_common.sh@940 -- # kill -0 85450
00:19:22.723 15:09:53 -- common/autotest_common.sh@941 -- # uname
00:19:22.723 15:09:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:22.723 15:09:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85450
00:19:22.982 15:09:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:22.982 15:09:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:22.982 killing process with pid 85450
00:19:22.982 15:09:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85450'
00:19:22.982 15:09:53 -- common/autotest_common.sh@955 -- # kill 85450
00:19:22.982 Received shutdown signal, test time was about 9.406513 seconds
00:19:22.982
00:19:22.982                                                                 Latency(us)
00:19:22.982 [2024-11-20T15:09:53.786Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:22.982 [2024-11-20T15:09:53.786Z] ===================================================================================================================
00:19:22.982 [2024-11-20T15:09:53.786Z] Total                       :               0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:19:22.982 15:09:53 -- common/autotest_common.sh@960 -- # wait 85450
00:19:22.982 15:09:53 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:23.241 [2024-11-20 15:09:53.898776] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:23.241 15:09:53 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:19:23.241 15:09:53 -- host/timeout.sh@74 -- # bdevperf_pid=85589
00:19:23.241 15:09:53 -- host/timeout.sh@76 -- # waitforlisten 85589 /var/tmp/bdevperf.sock
00:19:23.241 15:09:53 -- common/autotest_common.sh@829 -- # '[' -z 85589 ']'
00:19:23.241 15:09:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:23.241 15:09:53 -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:23.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:23.241 15:09:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:23.241 15:09:53 -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:23.241 15:09:53 -- common/autotest_common.sh@10 -- # set +x
00:19:23.241 [2024-11-20 15:09:53.953931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization...
00:19:23.241 [2024-11-20 15:09:53.954013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85589 ]
00:19:23.499 [2024-11-20 15:09:54.089436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:23.499 [2024-11-20 15:09:54.129113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:23.499 15:09:54 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:23.499 15:09:54 -- common/autotest_common.sh@862 -- # return 0
00:19:23.499 15:09:54 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:19:23.758 15:09:54 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:19:24.342 NVMe0n1
00:19:24.342 15:09:54 -- host/timeout.sh@84 -- # rpc_pid=85605
00:19:24.342 15:09:54 -- host/timeout.sh@86 -- # sleep 1
00:19:24.342 15:09:54 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:24.342 Running I/O for 10 seconds...
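The steps above bring up the second half of the test: the listener is re-added on the target, a fresh bdevperf instance is started idle, bdev_nvme is allowed unlimited bdev-layer I/O retries, and the controller is attached with an explicit reconnect policy before perform_tests starts the 10-second verify workload. A condensed sketch of that sequence, using only the commands and flags visible in the xtrace (the waitforlisten polling and PID bookkeeping are omitted; the variable names are illustrative):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
sock=/var/tmp/bdevperf.sock

# Target side: expose the subsystem over NVMe/TCP on 10.0.0.2:4420 again.
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: start bdevperf idle (-z) on its own RPC socket; queue depth 128, 4 KiB verify I/O, 10 s run.
"$bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &

# -r -1: bdev-level retry count of -1, i.e. keep retrying failed I/O instead of failing it upward.
"$rpc_py" -s "$sock" bdev_nvme_set_options -r -1

# Attach the controller with the reconnect policy this test exercises:
# retry the connection every 1 s, fast-fail queued I/O after 2 s, give up on the controller after 5 s.
"$rpc_py" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
	-f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
	--ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Kick off the I/O phase over the same RPC socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &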
00:19:25.311 15:09:55 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:25.311 [2024-11-20 15:09:56.072655] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.072933] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073094] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073227] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073350] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073456] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073568] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073714] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.073923] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074084] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074098] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074108] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074116] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074125] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074134] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074143] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074151] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074160] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074168] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074177] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074185] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074194] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074213] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074221] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074230] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074238] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074247] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074255] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074264] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074273] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074281] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074290] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074298] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074307] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074315] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ca1b0 is same with the state(5) to be set 00:19:25.311 [2024-11-20 15:09:56.074388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.311 [2024-11-20 15:09:56.074420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.311 [2024-11-20 15:09:56.074447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.311 [2024-11-20 15:09:56.074459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.311 [2024-11-20 15:09:56.074472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.311 [2024-11-20 15:09:56.074482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.311 [2024-11-20 15:09:56.074494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:25.311 [2024-11-20 15:09:56.074503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.311 [2024-11-20 15:09:56.074515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.311 [2024-11-20 15:09:56.074524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.311 [2024-11-20 15:09:56.074536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:114568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 
[2024-11-20 15:09:56.074733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:114600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:114608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:114632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:115280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.074925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074945] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.074988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.074999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.075752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.075773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.075795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.075811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.076174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.076191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.312 [2024-11-20 15:09:56.076202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.076214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.312 [2024-11-20 15:09:56.076224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.312 [2024-11-20 15:09:56.076236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:115344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:115376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:115408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:114776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:114792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:25.313 [2024-11-20 15:09:56.076821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:115504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:115512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.313 [2024-11-20 15:09:56.076893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.313 [2024-11-20 15:09:56.076914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.313 [2024-11-20 15:09:56.076926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.076936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.076947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:115544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.076956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.076968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.076977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.076990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.076999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 
15:09:56.077032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:114992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:115648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.314 [2024-11-20 15:09:56.077425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.314 [2024-11-20 15:09:56.077609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.314 [2024-11-20 15:09:56.077619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:115688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:115696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:115744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115768 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:115784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.077973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.077985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.077994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.078015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.078035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.078056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:115832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.078079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:115840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 [2024-11-20 15:09:56.078100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:115848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:25.315 
[2024-11-20 15:09:56.078120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.078142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:25.315 [2024-11-20 15:09:56.078165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7870 is same with the state(5) to be set 00:19:25.315 [2024-11-20 15:09:56.078192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:25.315 [2024-11-20 15:09:56.078200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:25.315 [2024-11-20 15:09:56.078209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115264 len:8 PRP1 0x0 PRP2 0x0 00:19:25.315 [2024-11-20 15:09:56.078218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:25.315 [2024-11-20 15:09:56.078271] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18f7870 was disconnected and freed. reset controller. 00:19:25.315 [2024-11-20 15:09:56.078552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:25.315 [2024-11-20 15:09:56.079604] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:25.315 [2024-11-20 15:09:56.080156] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.315 [2024-11-20 15:09:56.080449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.315 [2024-11-20 15:09:56.080733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:25.315 [2024-11-20 15:09:56.080961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420 00:19:25.315 [2024-11-20 15:09:56.081356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:25.315 [2024-11-20 15:09:56.081769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:25.315 [2024-11-20 15:09:56.082181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:25.315 [2024-11-20 15:09:56.082573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:25.315 [2024-11-20 15:09:56.082962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:25.315 [2024-11-20 15:09:56.083190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
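Annotation (not part of the captured log): errno = 111 in the uring_sock_create/posix_sock_create errors above is ECONNREFUSED on Linux, i.e. the target is simply not listening on 10.0.0.2:4420 while its TCP listener is removed, so every reconnect attempt from the initiator is refused until the listener is restored. A quick, purely illustrative shell check, not taken from the test itself:

# errno 111 -> ECONNREFUSED on Linux (explains the repeated connect() failures above)
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused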
00:19:25.316 [2024-11-20 15:09:56.083434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.316 15:09:56 -- host/timeout.sh@90 -- # sleep 1
00:19:26.691 [2024-11-20 15:09:57.084015] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:26.691 [2024-11-20 15:09:57.084544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:26.691 [2024-11-20 15:09:57.084603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:26.691 [2024-11-20 15:09:57.084622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420
00:19:26.691 [2024-11-20 15:09:57.084636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set
00:19:26.691 [2024-11-20 15:09:57.084723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor
00:19:26.691 [2024-11-20 15:09:57.084752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:26.691 [2024-11-20 15:09:57.084762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:26.691 [2024-11-20 15:09:57.084773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:26.691 [2024-11-20 15:09:57.084815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:26.691 [2024-11-20 15:09:57.084833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:26.691 15:09:57 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:26.691 [2024-11-20 15:09:57.333659] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:26.691 15:09:57 -- host/timeout.sh@92 -- # wait 85605
00:19:27.626 [2024-11-20 15:09:58.096497] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:34.187
00:19:34.187 Latency(us)
00:19:34.187 [2024-11-20T15:10:04.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:34.187 [2024-11-20T15:10:04.991Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:34.187 Verification LBA range: start 0x0 length 0x4000
00:19:34.187 NVMe0n1 : 10.01 9091.91 35.52 0.00 0.00 14050.22 1005.38 3019898.88
00:19:34.187 [2024-11-20T15:10:04.991Z] ===================================================================================================================
00:19:34.187 [2024-11-20T15:10:04.991Z] Total : 9091.91 35.52 0.00 0.00 14050.22 1005.38 3019898.88
00:19:34.187 0
00:19:34.445 15:10:04 -- host/timeout.sh@97 -- # rpc_pid=85710
00:19:34.445 15:10:04 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:34.445 15:10:04 -- host/timeout.sh@98 -- # sleep 1
00:19:34.445 Running I/O for 10 seconds...
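Annotation (not part of the captured log): the shell trace above (host/timeout.sh@90 through @98) shows the pattern this timeout test exercises. The target's TCP listener is dropped, the host's reconnect attempts fail with errno 111 until the listener is re-added via rpc.py nvmf_subsystem_add_listener, the queued bdevperf job (wait 85605) then completes and prints the latency table, and a second bdevperf run is started before the listener is removed again on the next logged command. A minimal sketch of that toggle, assuming only the rpc.py/bdevperf.py invocations that appear verbatim in this log; the loop structure and variable names are illustrative and are not the actual host/timeout.sh:

#!/usr/bin/env bash
# Sketch of the listener remove/re-add cycle seen in this trace (illustrative only).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
NQN=nqn.2016-06.io.spdk:cnode1

# Start a bdevperf run against the existing bdevperf RPC socket in the background.
"$BPERF" -s /var/tmp/bdevperf.sock perform_tests &
rpc_pid=$!

# Drop the TCP listener: host reconnects now fail with ECONNREFUSED (errno 111).
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1

# Restore the listener; the next controller reset reconnects and queued I/O completes.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
wait "$rpc_pid"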
00:19:35.473 15:10:06 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.733 [2024-11-20 15:10:06.282360] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282417] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282431] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282440] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282448] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282457] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282465] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282474] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282483] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282491] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282500] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282509] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.733 [2024-11-20 15:10:06.282517] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282526] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282535] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282543] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282552] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282560] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282569] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282577] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282585] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282594] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282602] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282611] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282619] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282628] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282656] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282669] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c7d80 is same with the state(5) to be set 00:19:35.734 [2024-11-20 15:10:06.282737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.282970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.282986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283951] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.283984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.283993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.284014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.284035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.284055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.284075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.734 [2024-11-20 15:10:06.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.734 [2024-11-20 15:10:06.284119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.734 [2024-11-20 15:10:06.284130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:118880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:118888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:118944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 
[2024-11-20 15:10:06.284585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:118960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:118968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:118976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:118992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.735 [2024-11-20 15:10:06.284797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.735 [2024-11-20 15:10:06.284808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.735 [2024-11-20 15:10:06.284817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:118440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:118456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.284969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.284987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:119040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:118488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:119064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:119072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:119096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119152 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.736 [2024-11-20 15:10:06.285952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.736 [2024-11-20 15:10:06.285972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.736 [2024-11-20 15:10:06.285984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:119176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.285998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 
[2024-11-20 15:10:06.286441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286665] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.286854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.286866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.286880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.287158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:119336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.287184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.287199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:35.737 [2024-11-20 15:10:06.287220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.287234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.287243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.287255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.287264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.737 [2024-11-20 15:10:06.287276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.737 [2024-11-20 15:10:06.287285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.738 [2024-11-20 15:10:06.287305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.738 [2024-11-20 15:10:06.287326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.738 [2024-11-20 15:10:06.287347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:35.738 [2024-11-20 15:10:06.287367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19af710 is same with the state(5) to be set 00:19:35.738 [2024-11-20 15:10:06.287392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:19:35.738 [2024-11-20 15:10:06.287404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:35.738 [2024-11-20 15:10:06.287413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118824 len:8 PRP1 0x0 PRP2 0x0 00:19:35.738 [2024-11-20 15:10:06.287422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287468] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19af710 was disconnected and freed. reset controller. 00:19:35.738 [2024-11-20 15:10:06.287553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.738 [2024-11-20 15:10:06.287569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.738 [2024-11-20 15:10:06.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.738 [2024-11-20 15:10:06.287613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.738 [2024-11-20 15:10:06.287631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.738 [2024-11-20 15:10:06.287656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:35.738 [2024-11-20 15:10:06.288053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.738 [2024-11-20 15:10:06.288090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:35.738 [2024-11-20 15:10:06.288192] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.738 [2024-11-20 15:10:06.288246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.738 [2024-11-20 15:10:06.288288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.738 [2024-11-20 15:10:06.288304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420 00:19:35.738 [2024-11-20 15:10:06.288315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:35.738 [2024-11-20 15:10:06.288334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:35.738 [2024-11-20 15:10:06.288350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.738 [2024-11-20 15:10:06.288359] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.738 [2024-11-20 15:10:06.288370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.738 [2024-11-20 15:10:06.288391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.738 [2024-11-20 15:10:06.288403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.738 15:10:06 -- host/timeout.sh@101 -- # sleep 3 00:19:36.673 [2024-11-20 15:10:07.288545] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.673 [2024-11-20 15:10:07.288677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.673 [2024-11-20 15:10:07.288726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:36.673 [2024-11-20 15:10:07.288744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420 00:19:36.673 [2024-11-20 15:10:07.288758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:36.673 [2024-11-20 15:10:07.288786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:36.673 [2024-11-20 15:10:07.288806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:36.673 [2024-11-20 15:10:07.288816] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:36.673 [2024-11-20 15:10:07.288832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:36.673 [2024-11-20 15:10:07.288860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:36.673 [2024-11-20 15:10:07.288873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.607 [2024-11-20 15:10:08.289022] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.607 [2024-11-20 15:10:08.289517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.607 [2024-11-20 15:10:08.289572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.607 [2024-11-20 15:10:08.289591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420 00:19:37.607 [2024-11-20 15:10:08.289605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:37.607 [2024-11-20 15:10:08.289658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:37.607 [2024-11-20 15:10:08.289684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.607 [2024-11-20 15:10:08.289705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:37.607 [2024-11-20 15:10:08.289716] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:37.607 [2024-11-20 15:10:08.289744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:37.607 [2024-11-20 15:10:08.289757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.543 [2024-11-20 15:10:09.291317] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.543 [2024-11-20 15:10:09.291426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.543 [2024-11-20 15:10:09.291470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:38.543 [2024-11-20 15:10:09.291488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18fc450 with addr=10.0.0.2, port=4420 00:19:38.543 [2024-11-20 15:10:09.291502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fc450 is same with the state(5) to be set 00:19:38.543 [2024-11-20 15:10:09.291698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18fc450 (9): Bad file descriptor 00:19:38.543 [2024-11-20 15:10:09.291802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:38.543 [2024-11-20 15:10:09.291814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:38.543 [2024-11-20 15:10:09.291825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:38.543 [2024-11-20 15:10:09.294539] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:38.543 [2024-11-20 15:10:09.294830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:38.543 15:10:09 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:38.801 [2024-11-20 15:10:09.578666] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:38.801 15:10:09 -- host/timeout.sh@103 -- # wait 85710 00:19:39.737 [2024-11-20 15:10:10.326005] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:45.023 00:19:45.023 Latency(us) 00:19:45.023 [2024-11-20T15:10:15.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.023 [2024-11-20T15:10:15.827Z] Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:45.023 Verification LBA range: start 0x0 length 0x4000 00:19:45.023 NVMe0n1 : 10.01 7719.91 30.16 5602.73 0.00 9588.71 670.25 3019898.88 00:19:45.023 [2024-11-20T15:10:15.827Z] =================================================================================================================== 00:19:45.023 [2024-11-20T15:10:15.827Z] Total : 7719.91 30.16 5602.73 0.00 9588.71 0.00 3019898.88 00:19:45.023 0 00:19:45.023 15:10:15 -- host/timeout.sh@105 -- # killprocess 85589 00:19:45.023 15:10:15 -- common/autotest_common.sh@936 -- # '[' -z 85589 ']' 00:19:45.023 15:10:15 -- common/autotest_common.sh@940 -- # kill -0 85589 00:19:45.023 15:10:15 -- common/autotest_common.sh@941 -- # uname 00:19:45.023 15:10:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.023 15:10:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85589 00:19:45.023 killing process with pid 85589 00:19:45.023 Received shutdown signal, test time was about 10.000000 seconds 00:19:45.023 00:19:45.023 Latency(us) 00:19:45.023 [2024-11-20T15:10:15.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.023 [2024-11-20T15:10:15.827Z] =================================================================================================================== 00:19:45.023 [2024-11-20T15:10:15.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.023 15:10:15 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:45.023 15:10:15 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:45.023 15:10:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85589' 00:19:45.023 15:10:15 -- common/autotest_common.sh@955 -- # kill 85589 00:19:45.023 15:10:15 -- common/autotest_common.sh@960 -- # wait 85589 00:19:45.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.023 15:10:15 -- host/timeout.sh@110 -- # bdevperf_pid=85824 00:19:45.023 15:10:15 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:19:45.023 15:10:15 -- host/timeout.sh@112 -- # waitforlisten 85824 /var/tmp/bdevperf.sock 00:19:45.023 15:10:15 -- common/autotest_common.sh@829 -- # '[' -z 85824 ']' 00:19:45.023 15:10:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.023 15:10:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.023 15:10:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.023 15:10:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.023 15:10:15 -- common/autotest_common.sh@10 -- # set +x 00:19:45.023 [2024-11-20 15:10:15.400753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:45.023 [2024-11-20 15:10:15.401604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85824 ] 00:19:45.023 [2024-11-20 15:10:15.542724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.023 [2024-11-20 15:10:15.582758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.023 15:10:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.023 15:10:15 -- common/autotest_common.sh@862 -- # return 0 00:19:45.023 15:10:15 -- host/timeout.sh@116 -- # dtrace_pid=85831 00:19:45.023 15:10:15 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 85824 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:19:45.023 15:10:15 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:19:45.282 15:10:16 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:45.541 NVMe0n1 00:19:45.541 15:10:16 -- host/timeout.sh@124 -- # rpc_pid=85874 00:19:45.541 15:10:16 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:45.541 15:10:16 -- host/timeout.sh@125 -- # sleep 1 00:19:45.799 Running I/O for 10 seconds... 00:19:46.814 15:10:17 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.815 [2024-11-20 15:10:17.600314] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600555] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600579] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600593] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600605] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600617] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600629] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600664] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600680] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600692] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.600704] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x778c20 is same with the state(5) to be set 00:19:46.815 [2024-11-20 15:10:17.603437]
tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603450] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603464] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603479] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603492] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603505] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603519] tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x778c20 is same with the state(5) to be set 00:19:46.817 [2024-11-20 15:10:17.603630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:46.817 [2024-11-20 15:10:17.603849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:56024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.603986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.603998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604063] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:67128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.817 [2024-11-20 15:10:17.604263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.817 [2024-11-20 15:10:17.604275] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:48472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:52992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:37 nsid:1 lba:119488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.818 [2024-11-20 15:10:17.604923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:46.818 [2024-11-20 15:10:17.604944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.818 [2024-11-20 15:10:17.604956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.604965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.604976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.604986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.604998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:41112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:67888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 
15:10:17.605154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:127304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:112120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.819 [2024-11-20 15:10:17.605627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.819 [2024-11-20 15:10:17.605649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:119960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.605990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.605999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.820 [2024-11-20 15:10:17.606169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.820 [2024-11-20 15:10:17.606180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 
[2024-11-20 15:10:17.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:46.821 [2024-11-20 15:10:17.606425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc259f0 is same with the state(5) to be set 00:19:46.821 [2024-11-20 15:10:17.606448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:46.821 [2024-11-20 15:10:17.606456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:19:46.821 [2024-11-20 15:10:17.606465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38976 len:8 PRP1 0x0 PRP2 0x0 00:19:46.821 [2024-11-20 15:10:17.606474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606527] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc259f0 was disconnected and freed. reset controller. 00:19:46.821 [2024-11-20 15:10:17.606606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.821 [2024-11-20 15:10:17.606624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.821 [2024-11-20 15:10:17.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.821 [2024-11-20 15:10:17.606682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:46.821 [2024-11-20 15:10:17.606701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:46.821 [2024-11-20 15:10:17.606710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a470 is same with the state(5) to be set 00:19:46.821 [2024-11-20 15:10:17.607700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:46.821 [2024-11-20 15:10:17.608295] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a470 (9): Bad file descriptor 00:19:46.821 [2024-11-20 15:10:17.608889] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.821 [2024-11-20 15:10:17.609191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.821 [2024-11-20 15:10:17.609454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.821 [2024-11-20 15:10:17.609699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2a470 with addr=10.0.0.2, port=4420 00:19:46.821 [2024-11-20 15:10:17.610150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a470 is same with the state(5) to be set 00:19:46.821 [2024-11-20 15:10:17.610632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a470 (9): Bad file descriptor 00:19:46.821 [2024-11-20 15:10:17.611129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:46.821 [2024-11-20 15:10:17.611522] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:46.821 
[2024-11-20 15:10:17.611974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:46.821 [2024-11-20 15:10:17.612280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:46.821 [2024-11-20 15:10:17.612563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:47.079 15:10:17 -- host/timeout.sh@128 -- # wait 85874 00:19:48.977 [2024-11-20 15:10:19.613190] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.977 [2024-11-20 15:10:19.613617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.977 [2024-11-20 15:10:19.613936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.977 [2024-11-20 15:10:19.614175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2a470 with addr=10.0.0.2, port=4420 00:19:48.977 [2024-11-20 15:10:19.614582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a470 is same with the state(5) to be set 00:19:48.977 [2024-11-20 15:10:19.615028] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a470 (9): Bad file descriptor 00:19:48.977 [2024-11-20 15:10:19.615062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:48.977 [2024-11-20 15:10:19.615074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:48.977 [2024-11-20 15:10:19.615086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:48.977 [2024-11-20 15:10:19.615116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:48.977 [2024-11-20 15:10:19.615129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:50.877 [2024-11-20 15:10:21.615295] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.877 [2024-11-20 15:10:21.615392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.877 [2024-11-20 15:10:21.615453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:50.877 [2024-11-20 15:10:21.615471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc2a470 with addr=10.0.0.2, port=4420 00:19:50.877 [2024-11-20 15:10:21.615486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2a470 is same with the state(5) to be set 00:19:50.877 [2024-11-20 15:10:21.615512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2a470 (9): Bad file descriptor 00:19:50.877 [2024-11-20 15:10:21.615532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:50.877 [2024-11-20 15:10:21.615543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:50.877 [2024-11-20 15:10:21.615554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:50.877 [2024-11-20 15:10:21.615584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:19:50.877 [2024-11-20 15:10:21.615596] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:53.410 [2024-11-20 15:10:23.615673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:53.410 [2024-11-20 15:10:23.615744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:53.410 [2024-11-20 15:10:23.615757] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:53.410 [2024-11-20 15:10:23.615769] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:53.410 [2024-11-20 15:10:23.615798] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.976 00:19:53.976 Latency(us) 00:19:53.976 [2024-11-20T15:10:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.976 [2024-11-20T15:10:24.780Z] Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:53.976 NVMe0n1 : 8.17 2083.73 8.14 15.67 0.00 61068.49 7864.32 7046430.72 00:19:53.976 [2024-11-20T15:10:24.780Z] =================================================================================================================== 00:19:53.976 [2024-11-20T15:10:24.780Z] Total : 2083.73 8.14 15.67 0.00 61068.49 7864.32 7046430.72 00:19:53.976 0 00:19:53.976 15:10:24 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.976 Attaching 5 probes... 00:19:53.976 1415.878335: reset bdev controller NVMe0 00:19:53.976 1416.996390: reconnect bdev controller NVMe0 00:19:53.976 3421.233467: reconnect delay bdev controller NVMe0 00:19:53.976 3421.257955: reconnect bdev controller NVMe0 00:19:53.976 5423.350314: reconnect delay bdev controller NVMe0 00:19:53.976 5423.371324: reconnect bdev controller NVMe0 00:19:53.976 7423.816093: reconnect delay bdev controller NVMe0 00:19:53.976 7423.852604: reconnect bdev controller NVMe0 00:19:53.976 15:10:24 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:53.976 15:10:24 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:53.976 15:10:24 -- host/timeout.sh@136 -- # kill 85831 00:19:53.976 15:10:24 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:53.976 15:10:24 -- host/timeout.sh@139 -- # killprocess 85824 00:19:53.976 15:10:24 -- common/autotest_common.sh@936 -- # '[' -z 85824 ']' 00:19:53.976 15:10:24 -- common/autotest_common.sh@940 -- # kill -0 85824 00:19:53.976 15:10:24 -- common/autotest_common.sh@941 -- # uname 00:19:53.976 15:10:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:53.976 15:10:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85824 00:19:53.976 killing process with pid 85824 00:19:53.976 Received shutdown signal, test time was about 8.233050 seconds 00:19:53.976 00:19:53.976 Latency(us) 00:19:53.976 [2024-11-20T15:10:24.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.976 [2024-11-20T15:10:24.780Z] =================================================================================================================== 00:19:53.976 [2024-11-20T15:10:24.780Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.976 15:10:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:53.976 15:10:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 
= sudo ']' 00:19:53.976 15:10:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85824' 00:19:53.976 15:10:24 -- common/autotest_common.sh@955 -- # kill 85824 00:19:53.976 15:10:24 -- common/autotest_common.sh@960 -- # wait 85824 00:19:54.235 15:10:24 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.494 15:10:25 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:54.494 15:10:25 -- host/timeout.sh@145 -- # nvmftestfini 00:19:54.494 15:10:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:54.494 15:10:25 -- nvmf/common.sh@116 -- # sync 00:19:54.494 15:10:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:54.494 15:10:25 -- nvmf/common.sh@119 -- # set +e 00:19:54.494 15:10:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:54.494 15:10:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:54.494 rmmod nvme_tcp 00:19:54.494 rmmod nvme_fabrics 00:19:54.494 rmmod nvme_keyring 00:19:54.494 15:10:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:54.494 15:10:25 -- nvmf/common.sh@123 -- # set -e 00:19:54.494 15:10:25 -- nvmf/common.sh@124 -- # return 0 00:19:54.494 15:10:25 -- nvmf/common.sh@477 -- # '[' -n 85401 ']' 00:19:54.494 15:10:25 -- nvmf/common.sh@478 -- # killprocess 85401 00:19:54.494 15:10:25 -- common/autotest_common.sh@936 -- # '[' -z 85401 ']' 00:19:54.494 15:10:25 -- common/autotest_common.sh@940 -- # kill -0 85401 00:19:54.494 15:10:25 -- common/autotest_common.sh@941 -- # uname 00:19:54.495 15:10:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:54.495 15:10:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85401 00:19:54.495 killing process with pid 85401 00:19:54.495 15:10:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:54.495 15:10:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:54.495 15:10:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85401' 00:19:54.495 15:10:25 -- common/autotest_common.sh@955 -- # kill 85401 00:19:54.495 15:10:25 -- common/autotest_common.sh@960 -- # wait 85401 00:19:54.754 15:10:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:54.754 15:10:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:54.754 15:10:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:54.754 15:10:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.754 15:10:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:54.754 15:10:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.754 15:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:54.754 15:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.754 15:10:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:54.754 ************************************ 00:19:54.754 END TEST nvmf_timeout 00:19:54.754 ************************************ 00:19:54.754 00:19:54.754 real 0m45.264s 00:19:54.754 user 2m12.873s 00:19:54.754 sys 0m5.281s 00:19:54.754 15:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:54.754 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 15:10:25 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:54.754 15:10:25 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:54.754 15:10:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.754 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 15:10:25 -- nvmf/nvmf.sh@129 
-- # trap - SIGINT SIGTERM EXIT 00:19:54.754 ************************************ 00:19:54.754 END TEST nvmf_tcp 00:19:54.754 ************************************ 00:19:54.754 00:19:54.754 real 10m30.957s 00:19:54.754 user 29m36.175s 00:19:54.754 sys 3m21.442s 00:19:54.754 15:10:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:54.754 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 15:10:25 -- spdk/autotest.sh@283 -- # [[ 1 -eq 0 ]] 00:19:54.754 15:10:25 -- spdk/autotest.sh@287 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:54.754 15:10:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:54.754 15:10:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:54.754 15:10:25 -- common/autotest_common.sh@10 -- # set +x 00:19:54.754 ************************************ 00:19:54.754 START TEST nvmf_dif 00:19:54.754 ************************************ 00:19:54.754 15:10:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:55.013 * Looking for test storage... 00:19:55.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:55.013 15:10:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:55.013 15:10:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:55.013 15:10:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:55.013 15:10:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:55.013 15:10:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:55.013 15:10:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:55.013 15:10:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:55.013 15:10:25 -- scripts/common.sh@335 -- # IFS=.-: 00:19:55.013 15:10:25 -- scripts/common.sh@335 -- # read -ra ver1 00:19:55.013 15:10:25 -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.013 15:10:25 -- scripts/common.sh@336 -- # read -ra ver2 00:19:55.013 15:10:25 -- scripts/common.sh@337 -- # local 'op=<' 00:19:55.013 15:10:25 -- scripts/common.sh@339 -- # ver1_l=2 00:19:55.013 15:10:25 -- scripts/common.sh@340 -- # ver2_l=1 00:19:55.013 15:10:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:55.013 15:10:25 -- scripts/common.sh@343 -- # case "$op" in 00:19:55.013 15:10:25 -- scripts/common.sh@344 -- # : 1 00:19:55.013 15:10:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:55.013 15:10:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.013 15:10:25 -- scripts/common.sh@364 -- # decimal 1 00:19:55.013 15:10:25 -- scripts/common.sh@352 -- # local d=1 00:19:55.013 15:10:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.013 15:10:25 -- scripts/common.sh@354 -- # echo 1 00:19:55.013 15:10:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:55.013 15:10:25 -- scripts/common.sh@365 -- # decimal 2 00:19:55.013 15:10:25 -- scripts/common.sh@352 -- # local d=2 00:19:55.013 15:10:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.013 15:10:25 -- scripts/common.sh@354 -- # echo 2 00:19:55.013 15:10:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:55.013 15:10:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:55.013 15:10:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:55.013 15:10:25 -- scripts/common.sh@367 -- # return 0 00:19:55.013 15:10:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.013 15:10:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.013 --rc genhtml_branch_coverage=1 00:19:55.013 --rc genhtml_function_coverage=1 00:19:55.013 --rc genhtml_legend=1 00:19:55.013 --rc geninfo_all_blocks=1 00:19:55.013 --rc geninfo_unexecuted_blocks=1 00:19:55.013 00:19:55.013 ' 00:19:55.013 15:10:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:55.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.013 --rc genhtml_branch_coverage=1 00:19:55.013 --rc genhtml_function_coverage=1 00:19:55.013 --rc genhtml_legend=1 00:19:55.013 --rc geninfo_all_blocks=1 00:19:55.014 --rc geninfo_unexecuted_blocks=1 00:19:55.014 00:19:55.014 ' 00:19:55.014 15:10:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.014 --rc genhtml_branch_coverage=1 00:19:55.014 --rc genhtml_function_coverage=1 00:19:55.014 --rc genhtml_legend=1 00:19:55.014 --rc geninfo_all_blocks=1 00:19:55.014 --rc geninfo_unexecuted_blocks=1 00:19:55.014 00:19:55.014 ' 00:19:55.014 15:10:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:55.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.014 --rc genhtml_branch_coverage=1 00:19:55.014 --rc genhtml_function_coverage=1 00:19:55.014 --rc genhtml_legend=1 00:19:55.014 --rc geninfo_all_blocks=1 00:19:55.014 --rc geninfo_unexecuted_blocks=1 00:19:55.014 00:19:55.014 ' 00:19:55.014 15:10:25 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.014 15:10:25 -- nvmf/common.sh@7 -- # uname -s 00:19:55.014 15:10:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.014 15:10:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.014 15:10:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.014 15:10:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.014 15:10:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.014 15:10:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.014 15:10:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.014 15:10:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.014 15:10:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.014 15:10:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:19:55.014 
15:10:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:19:55.014 15:10:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.014 15:10:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.014 15:10:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.014 15:10:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.014 15:10:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.014 15:10:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.014 15:10:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.014 15:10:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.014 15:10:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.014 15:10:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.014 15:10:25 -- paths/export.sh@5 -- # export PATH 00:19:55.014 15:10:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.014 15:10:25 -- nvmf/common.sh@46 -- # : 0 00:19:55.014 15:10:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:55.014 15:10:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:55.014 15:10:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:55.014 15:10:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.014 15:10:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.014 15:10:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:55.014 15:10:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:55.014 15:10:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:55.014 15:10:25 -- target/dif.sh@15 -- # NULL_META=16 00:19:55.014 15:10:25 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:55.014 15:10:25 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:55.014 15:10:25 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:55.014 15:10:25 -- target/dif.sh@135 -- # nvmftestinit 00:19:55.014 15:10:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:55.014 15:10:25 
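nvmftestinit, entered just above, registers a teardown handler before it builds any network state, so an interrupted run still removes the namespace and bridge it is about to create. A condensed sketch of that idiom (cleanup is a hypothetical stand-in for nvmftestfini; the names it deletes are the ones created further below in the trace):

# Sketch: register teardown before setup, mirroring the trap traced below.
cleanup() {
    # stand-in for nvmftestfini: stop the target app, then tear down the test fabric
    [[ -n ${nvmfpid:-} ]] && kill "$nvmfpid" 2>/dev/null
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip link delete nvmf_br type bridge 2>/dev/null || true
}
trap cleanup SIGINT SIGTERM EXIT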
-- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.014 15:10:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:55.014 15:10:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:55.014 15:10:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:55.014 15:10:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.014 15:10:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:55.014 15:10:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.014 15:10:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:55.014 15:10:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:55.014 15:10:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.014 15:10:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.014 15:10:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:55.014 15:10:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:55.014 15:10:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.014 15:10:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.014 15:10:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.014 15:10:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.014 15:10:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.014 15:10:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.014 15:10:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.014 15:10:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.014 15:10:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:55.014 15:10:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:55.014 Cannot find device "nvmf_tgt_br" 00:19:55.014 15:10:25 -- nvmf/common.sh@154 -- # true 00:19:55.014 15:10:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.014 Cannot find device "nvmf_tgt_br2" 00:19:55.014 15:10:25 -- nvmf/common.sh@155 -- # true 00:19:55.014 15:10:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:55.014 15:10:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:55.014 Cannot find device "nvmf_tgt_br" 00:19:55.014 15:10:25 -- nvmf/common.sh@157 -- # true 00:19:55.014 15:10:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:55.014 Cannot find device "nvmf_tgt_br2" 00:19:55.014 15:10:25 -- nvmf/common.sh@158 -- # true 00:19:55.014 15:10:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:55.274 15:10:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:55.274 15:10:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.274 15:10:25 -- nvmf/common.sh@161 -- # true 00:19:55.274 15:10:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.274 15:10:25 -- nvmf/common.sh@162 -- # true 00:19:55.274 15:10:25 -- nvmf/common.sh@165 -- # ip netns add 
nvmf_tgt_ns_spdk 00:19:55.274 15:10:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.274 15:10:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.274 15:10:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.274 15:10:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.274 15:10:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.274 15:10:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.274 15:10:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.274 15:10:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:55.274 15:10:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:55.274 15:10:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:55.274 15:10:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:55.274 15:10:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:55.274 15:10:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.274 15:10:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.274 15:10:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.274 15:10:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:55.274 15:10:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:55.274 15:10:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.274 15:10:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.274 15:10:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:55.274 15:10:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.274 15:10:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.274 15:10:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:55.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:19:55.274 00:19:55.274 --- 10.0.0.2 ping statistics --- 00:19:55.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.274 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:19:55.274 15:10:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:55.274 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.274 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:19:55.274 00:19:55.274 --- 10.0.0.3 ping statistics --- 00:19:55.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.274 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:55.274 15:10:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
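The ip commands traced above assemble an isolated test fabric: the target's veth ends live inside the nvmf_tgt_ns_spdk namespace, the initiator end stays in the root namespace, and a bridge joins the two halves so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3. Condensed into a sketch with a single target interface (the harness adds a second one, nvmf_tgt_if2 at 10.0.0.3, the same way):

# Sketch of the veth/netns topology built by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end moves into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up      # join the root-side ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                             # initiator -> target sanity check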
00:19:55.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:55.274 00:19:55.274 --- 10.0.0.1 ping statistics --- 00:19:55.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.274 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:55.274 15:10:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.274 15:10:26 -- nvmf/common.sh@421 -- # return 0 00:19:55.274 15:10:26 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:55.274 15:10:26 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:55.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:55.792 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:55.792 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:55.792 15:10:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.792 15:10:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:55.792 15:10:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:55.792 15:10:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.792 15:10:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:55.792 15:10:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:55.792 15:10:26 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:55.792 15:10:26 -- target/dif.sh@137 -- # nvmfappstart 00:19:55.792 15:10:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:55.792 15:10:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.792 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:55.792 15:10:26 -- nvmf/common.sh@469 -- # nvmfpid=86322 00:19:55.792 15:10:26 -- nvmf/common.sh@470 -- # waitforlisten 86322 00:19:55.792 15:10:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:55.792 15:10:26 -- common/autotest_common.sh@829 -- # '[' -z 86322 ']' 00:19:55.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.792 15:10:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.792 15:10:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.792 15:10:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.792 15:10:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.792 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:55.792 [2024-11-20 15:10:26.489779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:55.792 [2024-11-20 15:10:26.489879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.052 [2024-11-20 15:10:26.628541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.052 [2024-11-20 15:10:26.668833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:56.052 [2024-11-20 15:10:26.669008] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.052 [2024-11-20 15:10:26.669024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
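Once the fabric responds to pings, the target application is started inside the namespace and the harness waits on its RPC socket before issuing any configuration (nvmfappstart/waitforlisten above). A rough standalone equivalent, using the build and socket paths printed in the trace; the polling loop is a simplified stand-in for waitforlisten:

# Sketch: run nvmf_tgt inside the test namespace and wait for its RPC socket.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # poll until the app is listening on its UNIX-domain socket
done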
00:19:56.052 [2024-11-20 15:10:26.669035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:56.052 [2024-11-20 15:10:26.669065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.052 15:10:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.052 15:10:26 -- common/autotest_common.sh@862 -- # return 0 00:19:56.052 15:10:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:56.052 15:10:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.052 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 15:10:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.052 15:10:26 -- target/dif.sh@139 -- # create_transport 00:19:56.052 15:10:26 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:56.052 15:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.052 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 [2024-11-20 15:10:26.825920] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.052 15:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.052 15:10:26 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:56.052 15:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:56.052 15:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:56.052 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 ************************************ 00:19:56.052 START TEST fio_dif_1_default 00:19:56.052 ************************************ 00:19:56.052 15:10:26 -- common/autotest_common.sh@1114 -- # fio_dif_1 00:19:56.052 15:10:26 -- target/dif.sh@86 -- # create_subsystems 0 00:19:56.052 15:10:26 -- target/dif.sh@28 -- # local sub 00:19:56.052 15:10:26 -- target/dif.sh@30 -- # for sub in "$@" 00:19:56.052 15:10:26 -- target/dif.sh@31 -- # create_subsystem 0 00:19:56.052 15:10:26 -- target/dif.sh@18 -- # local sub_id=0 00:19:56.052 15:10:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:56.052 15:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.052 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.052 bdev_null0 00:19:56.052 15:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.052 15:10:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:56.052 15:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.052 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.311 15:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.311 15:10:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:56.311 15:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.311 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.311 15:10:26 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.312 15:10:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:56.312 15:10:26 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.312 15:10:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.312 [2024-11-20 15:10:26.870157] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.312 15:10:26 -- 
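The rpc_cmd calls traced above, written out as standalone scripts/rpc.py invocations (rpc_cmd effectively forwards to the target's RPC socket): create the TCP transport with DIF insert/strip enabled, back it with a small null bdev (64 MB, 512-byte blocks) carrying 16 bytes of per-block metadata for DIF type 1, then expose it through a subsystem with a TCP listener. Arguments are copied from the trace; only the explicit rpc.py form is a sketch.

# Sketch: the provisioning sequence above as explicit rpc.py calls.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420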
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.312 15:10:26 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:56.312 15:10:26 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:56.312 15:10:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:56.312 15:10:26 -- nvmf/common.sh@520 -- # config=() 00:19:56.312 15:10:26 -- nvmf/common.sh@520 -- # local subsystem config 00:19:56.312 15:10:26 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:56.312 15:10:26 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:56.312 { 00:19:56.312 "params": { 00:19:56.312 "name": "Nvme$subsystem", 00:19:56.312 "trtype": "$TEST_TRANSPORT", 00:19:56.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.312 "adrfam": "ipv4", 00:19:56.312 "trsvcid": "$NVMF_PORT", 00:19:56.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.312 "hdgst": ${hdgst:-false}, 00:19:56.312 "ddgst": ${ddgst:-false} 00:19:56.312 }, 00:19:56.312 "method": "bdev_nvme_attach_controller" 00:19:56.312 } 00:19:56.312 EOF 00:19:56.312 )") 00:19:56.312 15:10:26 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.312 15:10:26 -- target/dif.sh@82 -- # gen_fio_conf 00:19:56.312 15:10:26 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.312 15:10:26 -- target/dif.sh@54 -- # local file 00:19:56.312 15:10:26 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:19:56.312 15:10:26 -- target/dif.sh@56 -- # cat 00:19:56.312 15:10:26 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:56.312 15:10:26 -- common/autotest_common.sh@1328 -- # local sanitizers 00:19:56.312 15:10:26 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.312 15:10:26 -- nvmf/common.sh@542 -- # cat 00:19:56.312 15:10:26 -- common/autotest_common.sh@1330 -- # shift 00:19:56.312 15:10:26 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:19:56.312 15:10:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # grep libasan 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:56.312 15:10:26 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:56.312 15:10:26 -- target/dif.sh@72 -- # (( file <= files )) 00:19:56.312 15:10:26 -- nvmf/common.sh@544 -- # jq . 
00:19:56.312 15:10:26 -- nvmf/common.sh@545 -- # IFS=, 00:19:56.312 15:10:26 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:56.312 "params": { 00:19:56.312 "name": "Nvme0", 00:19:56.312 "trtype": "tcp", 00:19:56.312 "traddr": "10.0.0.2", 00:19:56.312 "adrfam": "ipv4", 00:19:56.312 "trsvcid": "4420", 00:19:56.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:56.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:56.312 "hdgst": false, 00:19:56.312 "ddgst": false 00:19:56.312 }, 00:19:56.312 "method": "bdev_nvme_attach_controller" 00:19:56.312 }' 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:56.312 15:10:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:56.312 15:10:26 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:19:56.312 15:10:26 -- common/autotest_common.sh@1334 -- # asan_lib= 00:19:56.312 15:10:26 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:19:56.312 15:10:26 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:56.312 15:10:26 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:56.312 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:56.312 fio-3.35 00:19:56.312 Starting 1 thread 00:19:56.878 [2024-11-20 15:10:27.398207] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
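The JSON fragment printed above is what jq splices into the bdev configuration handed to fio's spdk_bdev ioengine, so the initiator side never touches the kernel nvme-tcp driver. A hedged sketch of that hand-off: the outer "subsystems" wrapper, the Nvme0n1 bdev name, and any job option beyond the randread/4 KiB/iodepth=4 shape visible in the run are assumptions based on SPDK's usual conventions, not copied from the trace.

# Sketch: wrap the printed attach_controller params and run fio against the
# resulting SPDK bdev (wrapper and bdev name assumed; core params from the trace).
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
    --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=4096 --iodepth=4 --time_based --runtime=10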
00:19:56.878 [2024-11-20 15:10:27.398785] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:06.907 00:20:06.907 filename0: (groupid=0, jobs=1): err= 0: pid=86377: Wed Nov 20 15:10:37 2024 00:20:06.907 read: IOPS=8693, BW=34.0MiB/s (35.6MB/s)(340MiB/10001msec) 00:20:06.907 slat (usec): min=7, max=1103, avg= 8.81, stdev= 4.32 00:20:06.907 clat (usec): min=387, max=4848, avg=434.12, stdev=37.12 00:20:06.907 lat (usec): min=394, max=4878, avg=442.93, stdev=37.88 00:20:06.907 clat percentiles (usec): 00:20:06.907 | 1.00th=[ 400], 5.00th=[ 404], 10.00th=[ 412], 20.00th=[ 420], 00:20:06.907 | 30.00th=[ 424], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 437], 00:20:06.907 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 457], 95.00th=[ 465], 00:20:06.907 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 594], 99.95th=[ 611], 00:20:06.907 | 99.99th=[ 1139] 00:20:06.907 bw ( KiB/s): min=33536, max=35264, per=100.00%, avg=34787.37, stdev=422.63, samples=19 00:20:06.907 iops : min= 8384, max= 8816, avg=8696.84, stdev=105.66, samples=19 00:20:06.907 lat (usec) : 500=99.13%, 750=0.86%, 1000=0.01% 00:20:06.907 lat (msec) : 2=0.01%, 10=0.01% 00:20:06.907 cpu : usr=85.85%, sys=12.25%, ctx=19, majf=0, minf=8 00:20:06.907 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.907 issued rwts: total=86948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.907 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:06.907 00:20:06.907 Run status group 0 (all jobs): 00:20:06.907 READ: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=340MiB (356MB), run=10001-10001msec 00:20:06.907 15:10:37 -- target/dif.sh@88 -- # destroy_subsystems 0 00:20:06.907 15:10:37 -- target/dif.sh@43 -- # local sub 00:20:06.907 15:10:37 -- target/dif.sh@45 -- # for sub in "$@" 00:20:06.907 15:10:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:06.907 15:10:37 -- target/dif.sh@36 -- # local sub_id=0 00:20:06.907 15:10:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:06.907 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.907 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:06.907 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.907 15:10:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:06.907 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.907 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:06.907 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.907 00:20:06.907 real 0m10.845s 00:20:06.907 user 0m9.104s 00:20:06.907 sys 0m1.447s 00:20:06.907 15:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:06.907 ************************************ 00:20:06.907 END TEST fio_dif_1_default 00:20:06.907 ************************************ 00:20:06.907 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:20:07.165 15:10:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:07.165 15:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 ************************************ 00:20:07.165 START TEST 
fio_dif_1_multi_subsystems 00:20:07.165 ************************************ 00:20:07.165 15:10:37 -- common/autotest_common.sh@1114 -- # fio_dif_1_multi_subsystems 00:20:07.165 15:10:37 -- target/dif.sh@92 -- # local files=1 00:20:07.165 15:10:37 -- target/dif.sh@94 -- # create_subsystems 0 1 00:20:07.165 15:10:37 -- target/dif.sh@28 -- # local sub 00:20:07.165 15:10:37 -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.165 15:10:37 -- target/dif.sh@31 -- # create_subsystem 0 00:20:07.165 15:10:37 -- target/dif.sh@18 -- # local sub_id=0 00:20:07.165 15:10:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 bdev_null0 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 [2024-11-20 15:10:37.765173] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@30 -- # for sub in "$@" 00:20:07.165 15:10:37 -- target/dif.sh@31 -- # create_subsystem 1 00:20:07.165 15:10:37 -- target/dif.sh@18 -- # local sub_id=1 00:20:07.165 15:10:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 bdev_null1 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.165 15:10:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.165 15:10:37 -- 
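fio_dif_1_multi_subsystems repeats the same provisioning once per index, as traced above for cnode0 and cnode1, both listening on the same address and port but under distinct NQNs. Condensed into a loop over scripts/rpc.py (arguments copied from the trace; the loop form itself is a sketch):

# Sketch: per-index provisioning loop equivalent to the cnode0/cnode1 setup above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for i in 0 1; do
    $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done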
common/autotest_common.sh@10 -- # set +x 00:20:07.165 15:10:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.165 15:10:37 -- target/dif.sh@95 -- # fio /dev/fd/62 00:20:07.165 15:10:37 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:20:07.165 15:10:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:07.165 15:10:37 -- nvmf/common.sh@520 -- # config=() 00:20:07.165 15:10:37 -- nvmf/common.sh@520 -- # local subsystem config 00:20:07.165 15:10:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.165 15:10:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:07.165 15:10:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:07.165 { 00:20:07.165 "params": { 00:20:07.165 "name": "Nvme$subsystem", 00:20:07.165 "trtype": "$TEST_TRANSPORT", 00:20:07.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.165 "adrfam": "ipv4", 00:20:07.165 "trsvcid": "$NVMF_PORT", 00:20:07.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.165 "hdgst": ${hdgst:-false}, 00:20:07.165 "ddgst": ${ddgst:-false} 00:20:07.165 }, 00:20:07.165 "method": "bdev_nvme_attach_controller" 00:20:07.165 } 00:20:07.165 EOF 00:20:07.165 )") 00:20:07.165 15:10:37 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.165 15:10:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:07.165 15:10:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.165 15:10:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:07.165 15:10:37 -- target/dif.sh@82 -- # gen_fio_conf 00:20:07.165 15:10:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.165 15:10:37 -- common/autotest_common.sh@1330 -- # shift 00:20:07.165 15:10:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:07.165 15:10:37 -- target/dif.sh@54 -- # local file 00:20:07.165 15:10:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.165 15:10:37 -- target/dif.sh@56 -- # cat 00:20:07.165 15:10:37 -- nvmf/common.sh@542 -- # cat 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:07.166 15:10:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:07.166 15:10:37 -- target/dif.sh@72 -- # (( file <= files )) 00:20:07.166 15:10:37 -- target/dif.sh@73 -- # cat 00:20:07.166 15:10:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:07.166 15:10:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:07.166 { 00:20:07.166 "params": { 00:20:07.166 "name": "Nvme$subsystem", 00:20:07.166 "trtype": "$TEST_TRANSPORT", 00:20:07.166 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.166 "adrfam": "ipv4", 00:20:07.166 "trsvcid": "$NVMF_PORT", 00:20:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.166 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.166 "hdgst": ${hdgst:-false}, 00:20:07.166 "ddgst": ${ddgst:-false} 00:20:07.166 }, 00:20:07.166 "method": "bdev_nvme_attach_controller" 00:20:07.166 } 00:20:07.166 EOF 00:20:07.166 )") 00:20:07.166 15:10:37 -- target/dif.sh@72 -- # (( file++ )) 00:20:07.166 15:10:37 -- 
target/dif.sh@72 -- # (( file <= files )) 00:20:07.166 15:10:37 -- nvmf/common.sh@542 -- # cat 00:20:07.166 15:10:37 -- nvmf/common.sh@544 -- # jq . 00:20:07.166 15:10:37 -- nvmf/common.sh@545 -- # IFS=, 00:20:07.166 15:10:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:07.166 "params": { 00:20:07.166 "name": "Nvme0", 00:20:07.166 "trtype": "tcp", 00:20:07.166 "traddr": "10.0.0.2", 00:20:07.166 "adrfam": "ipv4", 00:20:07.166 "trsvcid": "4420", 00:20:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:07.166 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:07.166 "hdgst": false, 00:20:07.166 "ddgst": false 00:20:07.166 }, 00:20:07.166 "method": "bdev_nvme_attach_controller" 00:20:07.166 },{ 00:20:07.166 "params": { 00:20:07.166 "name": "Nvme1", 00:20:07.166 "trtype": "tcp", 00:20:07.166 "traddr": "10.0.0.2", 00:20:07.166 "adrfam": "ipv4", 00:20:07.166 "trsvcid": "4420", 00:20:07.166 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.166 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.166 "hdgst": false, 00:20:07.166 "ddgst": false 00:20:07.166 }, 00:20:07.166 "method": "bdev_nvme_attach_controller" 00:20:07.166 }' 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:07.166 15:10:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:07.166 15:10:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:07.166 15:10:37 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:07.166 15:10:37 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:07.166 15:10:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.166 15:10:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:07.424 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:07.424 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:20:07.424 fio-3.35 00:20:07.424 Starting 2 threads 00:20:07.683 [2024-11-20 15:10:38.390115] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:07.683 [2024-11-20 15:10:38.390182] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:19.878 00:20:19.878 filename0: (groupid=0, jobs=1): err= 0: pid=86543: Wed Nov 20 15:10:48 2024 00:20:19.878 read: IOPS=4819, BW=18.8MiB/s (19.7MB/s)(188MiB/10001msec) 00:20:19.878 slat (usec): min=7, max=1166, avg=13.59, stdev= 6.19 00:20:19.878 clat (usec): min=430, max=4823, avg=792.98, stdev=58.29 00:20:19.878 lat (usec): min=438, max=4850, avg=806.56, stdev=59.01 00:20:19.878 clat percentiles (usec): 00:20:19.878 | 1.00th=[ 701], 5.00th=[ 725], 10.00th=[ 742], 20.00th=[ 766], 00:20:19.878 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 799], 00:20:19.878 | 70.00th=[ 807], 80.00th=[ 816], 90.00th=[ 832], 95.00th=[ 848], 00:20:19.878 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1123], 00:20:19.878 | 99.99th=[ 1205] 00:20:19.878 bw ( KiB/s): min=18912, max=19584, per=50.00%, avg=19280.84, stdev=182.70, samples=19 00:20:19.878 iops : min= 4728, max= 4896, avg=4820.21, stdev=45.67, samples=19 00:20:19.878 lat (usec) : 500=0.02%, 750=11.96%, 1000=86.94% 00:20:19.878 lat (msec) : 2=1.08%, 10=0.01% 00:20:19.878 cpu : usr=90.20%, sys=8.32%, ctx=15, majf=0, minf=0 00:20:19.878 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.878 issued rwts: total=48204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.878 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.878 filename1: (groupid=0, jobs=1): err= 0: pid=86544: Wed Nov 20 15:10:48 2024 00:20:19.878 read: IOPS=4819, BW=18.8MiB/s (19.7MB/s)(188MiB/10001msec) 00:20:19.878 slat (nsec): min=7552, max=63185, avg=13597.68, stdev=3312.25 00:20:19.878 clat (usec): min=409, max=5048, avg=792.09, stdev=55.23 00:20:19.878 lat (usec): min=417, max=5077, avg=805.69, stdev=55.58 00:20:19.878 clat percentiles (usec): 00:20:19.878 | 1.00th=[ 734], 5.00th=[ 750], 10.00th=[ 758], 20.00th=[ 766], 00:20:19.878 | 30.00th=[ 775], 40.00th=[ 783], 50.00th=[ 791], 60.00th=[ 791], 00:20:19.878 | 70.00th=[ 799], 80.00th=[ 807], 90.00th=[ 824], 95.00th=[ 832], 00:20:19.878 | 99.00th=[ 1004], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1139], 00:20:19.878 | 99.99th=[ 1352] 00:20:19.878 bw ( KiB/s): min=18912, max=19584, per=50.01%, avg=19282.53, stdev=187.30, samples=19 00:20:19.878 iops : min= 4728, max= 4896, avg=4820.63, stdev=46.82, samples=19 00:20:19.878 lat (usec) : 500=0.02%, 750=4.91%, 1000=94.03% 00:20:19.878 lat (msec) : 2=1.04%, 10=0.01% 00:20:19.878 cpu : usr=89.98%, sys=8.59%, ctx=86, majf=0, minf=0 00:20:19.878 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:19.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:19.878 issued rwts: total=48204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:19.878 latency : target=0, window=0, percentile=100.00%, depth=4 00:20:19.878 00:20:19.878 Run status group 0 (all jobs): 00:20:19.878 READ: bw=37.7MiB/s (39.5MB/s), 18.8MiB/s-18.8MiB/s (19.7MB/s-19.7MB/s), io=377MiB (395MB), run=10001-10001msec 00:20:19.878 15:10:48 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:20:19.878 15:10:48 -- target/dif.sh@43 -- # local sub 00:20:19.878 15:10:48 -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.878 15:10:48 
-- target/dif.sh@46 -- # destroy_subsystem 0 00:20:19.878 15:10:48 -- target/dif.sh@36 -- # local sub_id=0 00:20:19.878 15:10:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:19.878 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.878 15:10:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:19.878 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.878 15:10:48 -- target/dif.sh@45 -- # for sub in "$@" 00:20:19.878 15:10:48 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:19.878 15:10:48 -- target/dif.sh@36 -- # local sub_id=1 00:20:19.878 15:10:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.878 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.878 15:10:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:19.878 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.878 00:20:19.878 real 0m10.942s 00:20:19.878 user 0m18.655s 00:20:19.878 sys 0m1.909s 00:20:19.878 15:10:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:19.878 ************************************ 00:20:19.878 END TEST fio_dif_1_multi_subsystems 00:20:19.878 ************************************ 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 15:10:48 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:20:19.878 15:10:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:19.878 15:10:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:19.878 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.878 ************************************ 00:20:19.878 START TEST fio_dif_rand_params 00:20:19.878 ************************************ 00:20:19.878 15:10:48 -- common/autotest_common.sh@1114 -- # fio_dif_rand_params 00:20:19.878 15:10:48 -- target/dif.sh@100 -- # local NULL_DIF 00:20:19.878 15:10:48 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:20:19.879 15:10:48 -- target/dif.sh@103 -- # NULL_DIF=3 00:20:19.879 15:10:48 -- target/dif.sh@103 -- # bs=128k 00:20:19.879 15:10:48 -- target/dif.sh@103 -- # numjobs=3 00:20:19.879 15:10:48 -- target/dif.sh@103 -- # iodepth=3 00:20:19.879 15:10:48 -- target/dif.sh@103 -- # runtime=5 00:20:19.879 15:10:48 -- target/dif.sh@105 -- # create_subsystems 0 00:20:19.879 15:10:48 -- target/dif.sh@28 -- # local sub 00:20:19.879 15:10:48 -- target/dif.sh@30 -- # for sub in "$@" 00:20:19.879 15:10:48 -- target/dif.sh@31 -- # create_subsystem 0 00:20:19.879 15:10:48 -- target/dif.sh@18 -- # local sub_id=0 00:20:19.879 15:10:48 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:19.879 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.879 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.879 bdev_null0 00:20:19.879 15:10:48 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:20:19.879 15:10:48 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:19.879 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.879 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.879 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.879 15:10:48 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:19.879 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.879 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.879 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.879 15:10:48 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.879 15:10:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.879 15:10:48 -- common/autotest_common.sh@10 -- # set +x 00:20:19.879 [2024-11-20 15:10:48.759385] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.879 15:10:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.879 15:10:48 -- target/dif.sh@106 -- # fio /dev/fd/62 00:20:19.879 15:10:48 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:20:19.879 15:10:48 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:19.879 15:10:48 -- nvmf/common.sh@520 -- # config=() 00:20:19.879 15:10:48 -- nvmf/common.sh@520 -- # local subsystem config 00:20:19.879 15:10:48 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:19.879 15:10:48 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:19.879 { 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme$subsystem", 00:20:19.879 "trtype": "$TEST_TRANSPORT", 00:20:19.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "$NVMF_PORT", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.879 "hdgst": ${hdgst:-false}, 00:20:19.879 "ddgst": ${ddgst:-false} 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 } 00:20:19.879 EOF 00:20:19.879 )") 00:20:19.879 15:10:48 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.879 15:10:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.879 15:10:48 -- target/dif.sh@82 -- # gen_fio_conf 00:20:19.879 15:10:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:19.879 15:10:48 -- target/dif.sh@54 -- # local file 00:20:19.879 15:10:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:19.879 15:10:48 -- target/dif.sh@56 -- # cat 00:20:19.879 15:10:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:19.879 15:10:48 -- nvmf/common.sh@542 -- # cat 00:20:19.879 15:10:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.879 15:10:48 -- common/autotest_common.sh@1330 -- # shift 00:20:19.879 15:10:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:19.879 15:10:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.879 15:10:48 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:19.879 15:10:48 -- target/dif.sh@72 -- # (( file <= files )) 00:20:19.879 15:10:48 -- 
common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:19.879 15:10:48 -- nvmf/common.sh@544 -- # jq . 00:20:19.879 15:10:48 -- nvmf/common.sh@545 -- # IFS=, 00:20:19.879 15:10:48 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:19.879 "params": { 00:20:19.879 "name": "Nvme0", 00:20:19.879 "trtype": "tcp", 00:20:19.879 "traddr": "10.0.0.2", 00:20:19.879 "adrfam": "ipv4", 00:20:19.879 "trsvcid": "4420", 00:20:19.879 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:19.879 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:19.879 "hdgst": false, 00:20:19.879 "ddgst": false 00:20:19.879 }, 00:20:19.879 "method": "bdev_nvme_attach_controller" 00:20:19.879 }' 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:19.879 15:10:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:19.879 15:10:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:19.879 15:10:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:19.879 15:10:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:19.879 15:10:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:19.879 15:10:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:19.879 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:19.879 ... 00:20:19.879 fio-3.35 00:20:19.879 Starting 3 threads 00:20:19.879 [2024-11-20 15:10:49.297831] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
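This fio_dif_rand_params pass runs against the DIF type 3 null bdev created above (--dif-type 3) with 128 KiB random reads, three jobs, queue depth 3, for 5 seconds, matching the NULL_DIF/bs/numjobs/iodepth/runtime values traced earlier. A hedged sketch of an equivalent job file; the Nvme0n1 bdev name and the reuse of the /tmp/bdev.json config from the earlier sketch are assumptions:

# Sketch: job shape used by fio_dif_rand_params above (bs=128k, 3 jobs, QD 3, 5 s).
cat > /tmp/dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=/tmp/bdev.json
thread=1
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio /tmp/dif_rand.fio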
00:20:19.879 [2024-11-20 15:10:49.297946] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:24.068 00:20:24.068 filename0: (groupid=0, jobs=1): err= 0: pid=86694: Wed Nov 20 15:10:54 2024 00:20:24.068 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5003msec) 00:20:24.068 slat (nsec): min=3728, max=38257, avg=15967.79, stdev=4762.24 00:20:24.068 clat (usec): min=11411, max=18044, avg=11679.21, stdev=391.61 00:20:24.068 lat (usec): min=11429, max=18064, avg=11695.18, stdev=391.64 00:20:24.068 clat percentiles (usec): 00:20:24.068 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:20:24.068 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:24.068 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11863], 95.00th=[11863], 00:20:24.068 | 99.00th=[12911], 99.50th=[14091], 99.90th=[17957], 99.95th=[17957], 00:20:24.068 | 99.99th=[17957] 00:20:24.068 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32768.00, stdev=384.00, samples=9 00:20:24.068 iops : min= 252, max= 258, avg=256.00, stdev= 3.00, samples=9 00:20:24.068 lat (msec) : 20=100.00% 00:20:24.068 cpu : usr=91.76%, sys=7.60%, ctx=6, majf=0, minf=8 00:20:24.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:24.068 filename0: (groupid=0, jobs=1): err= 0: pid=86695: Wed Nov 20 15:10:54 2024 00:20:24.068 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(160MiB/5002msec) 00:20:24.068 slat (nsec): min=3745, max=45438, avg=16340.24, stdev=4408.53 00:20:24.068 clat (usec): min=11418, max=17526, avg=11677.96, stdev=371.69 00:20:24.068 lat (usec): min=11426, max=17538, avg=11694.30, stdev=371.41 00:20:24.068 clat percentiles (usec): 00:20:24.068 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 20.00th=[11600], 00:20:24.068 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:24.068 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11731], 95.00th=[11863], 00:20:24.068 | 99.00th=[12911], 99.50th=[14091], 99.90th=[17433], 99.95th=[17433], 00:20:24.068 | 99.99th=[17433] 00:20:24.068 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32768.00, stdev=384.00, samples=9 00:20:24.068 iops : min= 252, max= 258, avg=256.00, stdev= 3.00, samples=9 00:20:24.068 lat (msec) : 20=100.00% 00:20:24.068 cpu : usr=92.18%, sys=7.26%, ctx=9, majf=0, minf=9 00:20:24.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 issued rwts: total=1281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:24.068 filename0: (groupid=0, jobs=1): err= 0: pid=86696: Wed Nov 20 15:10:54 2024 00:20:24.068 read: IOPS=256, BW=32.1MiB/s (33.6MB/s)(161MiB/5004msec) 00:20:24.068 slat (nsec): min=6911, max=36831, avg=16444.51, stdev=4240.43 00:20:24.068 clat (usec): min=4105, max=14655, avg=11652.73, stdev=459.26 00:20:24.068 lat (usec): min=4113, max=14676, avg=11669.17, stdev=459.53 00:20:24.068 clat percentiles (usec): 00:20:24.068 | 1.00th=[11469], 5.00th=[11469], 10.00th=[11469], 
20.00th=[11600], 00:20:24.068 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11600], 60.00th=[11600], 00:20:24.068 | 70.00th=[11731], 80.00th=[11731], 90.00th=[11731], 95.00th=[11863], 00:20:24.068 | 99.00th=[12911], 99.50th=[14091], 99.90th=[14615], 99.95th=[14615], 00:20:24.068 | 99.99th=[14615] 00:20:24.068 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32768.00, stdev=384.00, samples=9 00:20:24.068 iops : min= 252, max= 258, avg=256.00, stdev= 3.00, samples=9 00:20:24.068 lat (msec) : 10=0.23%, 20=99.77% 00:20:24.068 cpu : usr=91.90%, sys=7.52%, ctx=7, majf=0, minf=9 00:20:24.068 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.068 issued rwts: total=1284,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.068 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:24.068 00:20:24.068 Run status group 0 (all jobs): 00:20:24.068 READ: bw=96.1MiB/s (101MB/s), 32.0MiB/s-32.1MiB/s (33.6MB/s-33.6MB/s), io=481MiB (504MB), run=5002-5004msec 00:20:24.068 15:10:54 -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:24.068 15:10:54 -- target/dif.sh@43 -- # local sub 00:20:24.068 15:10:54 -- target/dif.sh@45 -- # for sub in "$@" 00:20:24.068 15:10:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:24.068 15:10:54 -- target/dif.sh@36 -- # local sub_id=0 00:20:24.068 15:10:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # NULL_DIF=2 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # bs=4k 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # numjobs=8 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # iodepth=16 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # runtime= 00:20:24.068 15:10:54 -- target/dif.sh@109 -- # files=2 00:20:24.068 15:10:54 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:24.068 15:10:54 -- target/dif.sh@28 -- # local sub 00:20:24.068 15:10:54 -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.068 15:10:54 -- target/dif.sh@31 -- # create_subsystem 0 00:20:24.068 15:10:54 -- target/dif.sh@18 -- # local sub_id=0 00:20:24.068 15:10:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 bdev_null0 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 [2024-11-20 15:10:54.598188] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.068 15:10:54 -- target/dif.sh@31 -- # create_subsystem 1 00:20:24.068 15:10:54 -- target/dif.sh@18 -- # local sub_id=1 00:20:24.068 15:10:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 bdev_null1 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@30 -- # for sub in "$@" 00:20:24.068 15:10:54 -- target/dif.sh@31 -- # create_subsystem 2 00:20:24.068 15:10:54 -- target/dif.sh@18 -- # local sub_id=2 00:20:24.068 15:10:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 bdev_null2 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.068 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.068 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:20:24.068 15:10:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:24.068 15:10:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.069 15:10:54 -- common/autotest_common.sh@10 -- # set +x 00:20:24.069 15:10:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.069 15:10:54 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:24.069 15:10:54 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:24.069 15:10:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:24.069 15:10:54 -- nvmf/common.sh@520 -- # config=() 00:20:24.069 15:10:54 -- nvmf/common.sh@520 -- # local subsystem config 00:20:24.069 15:10:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.069 15:10:54 -- target/dif.sh@82 -- # gen_fio_conf 00:20:24.069 15:10:54 -- target/dif.sh@54 -- # local file 00:20:24.069 15:10:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:24.069 15:10:54 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.069 15:10:54 -- target/dif.sh@56 -- # cat 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:24.069 { 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme$subsystem", 00:20:24.069 "trtype": "$TEST_TRANSPORT", 00:20:24.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": "$NVMF_PORT", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.069 "hdgst": ${hdgst:-false}, 00:20:24.069 "ddgst": ${ddgst:-false} 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 } 00:20:24.069 EOF 00:20:24.069 )") 00:20:24.069 15:10:54 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:24.069 15:10:54 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:24.069 15:10:54 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:24.069 15:10:54 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.069 15:10:54 -- common/autotest_common.sh@1330 -- # shift 00:20:24.069 15:10:54 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:24.069 15:10:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # cat 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.069 15:10:54 -- target/dif.sh@73 -- # cat 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file++ )) 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.069 15:10:54 -- target/dif.sh@73 -- # cat 00:20:24.069 15:10:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:24.069 { 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme$subsystem", 00:20:24.069 "trtype": "$TEST_TRANSPORT", 00:20:24.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": 
"$NVMF_PORT", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.069 "hdgst": ${hdgst:-false}, 00:20:24.069 "ddgst": ${ddgst:-false} 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 } 00:20:24.069 EOF 00:20:24.069 )") 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file++ )) 00:20:24.069 15:10:54 -- target/dif.sh@72 -- # (( file <= files )) 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # cat 00:20:24.069 15:10:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:24.069 { 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme$subsystem", 00:20:24.069 "trtype": "$TEST_TRANSPORT", 00:20:24.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": "$NVMF_PORT", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:24.069 "hdgst": ${hdgst:-false}, 00:20:24.069 "ddgst": ${ddgst:-false} 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 } 00:20:24.069 EOF 00:20:24.069 )") 00:20:24.069 15:10:54 -- nvmf/common.sh@542 -- # cat 00:20:24.069 15:10:54 -- nvmf/common.sh@544 -- # jq . 00:20:24.069 15:10:54 -- nvmf/common.sh@545 -- # IFS=, 00:20:24.069 15:10:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme0", 00:20:24.069 "trtype": "tcp", 00:20:24.069 "traddr": "10.0.0.2", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": "4420", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:24.069 "hdgst": false, 00:20:24.069 "ddgst": false 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 },{ 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme1", 00:20:24.069 "trtype": "tcp", 00:20:24.069 "traddr": "10.0.0.2", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": "4420", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.069 "hdgst": false, 00:20:24.069 "ddgst": false 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 },{ 00:20:24.069 "params": { 00:20:24.069 "name": "Nvme2", 00:20:24.069 "trtype": "tcp", 00:20:24.069 "traddr": "10.0.0.2", 00:20:24.069 "adrfam": "ipv4", 00:20:24.069 "trsvcid": "4420", 00:20:24.069 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:24.069 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:24.069 "hdgst": false, 00:20:24.069 "ddgst": false 00:20:24.069 }, 00:20:24.069 "method": "bdev_nvme_attach_controller" 00:20:24.069 }' 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:24.069 15:10:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:24.069 15:10:54 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:24.069 15:10:54 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:24.069 15:10:54 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:24.069 15:10:54 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:24.069 15:10:54 -- 
common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:24.329 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:24.329 ... 00:20:24.329 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:24.329 ... 00:20:24.329 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:24.329 ... 00:20:24.329 fio-3.35 00:20:24.329 Starting 24 threads 00:20:24.587 [2024-11-20 15:10:55.351804] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:24.587 [2024-11-20 15:10:55.351890] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:36.844 00:20:36.844 filename0: (groupid=0, jobs=1): err= 0: pid=86792: Wed Nov 20 15:11:05 2024 00:20:36.844 read: IOPS=173, BW=694KiB/s (711kB/s)(6972KiB/10039msec) 00:20:36.844 slat (usec): min=6, max=5034, avg=20.99, stdev=169.93 00:20:36.844 clat (msec): min=17, max=162, avg=92.00, stdev=25.75 00:20:36.845 lat (msec): min=17, max=162, avg=92.02, stdev=25.74 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 39], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 71], 00:20:36.845 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 104], 00:20:36.845 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 136], 00:20:36.845 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 159], 99.95th=[ 163], 00:20:36.845 | 99.99th=[ 163] 00:20:36.845 bw ( KiB/s): min= 584, max= 976, per=4.17%, avg=690.25, stdev=114.97, samples=20 00:20:36.845 iops : min= 146, max= 244, avg=172.50, stdev=28.72, samples=20 00:20:36.845 lat (msec) : 20=0.80%, 50=4.82%, 100=53.64%, 250=40.73% 00:20:36.845 cpu : usr=43.31%, sys=2.92%, ctx=1218, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=81.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86793: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=181, BW=725KiB/s (742kB/s)(7260KiB/10015msec) 00:20:36.845 slat (usec): min=3, max=8047, avg=50.60, stdev=471.50 00:20:36.845 clat (msec): min=31, max=157, avg=88.02, stdev=24.75 00:20:36.845 lat (msec): min=31, max=157, avg=88.07, stdev=24.74 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 68], 00:20:36.845 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 96], 00:20:36.845 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 132], 00:20:36.845 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 159], 00:20:36.845 | 99.99th=[ 159] 00:20:36.845 bw ( KiB/s): min= 608, max= 968, per=4.34%, avg=719.55, stdev=103.64, samples=20 00:20:36.845 iops : min= 152, max= 242, avg=179.85, stdev=25.91, samples=20 00:20:36.845 lat (msec) : 50=7.16%, 100=57.74%, 250=35.10% 00:20:36.845 cpu : usr=41.33%, sys=2.38%, ctx=1488, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:20:36.845 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86794: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=174, BW=698KiB/s (715kB/s)(7012KiB/10040msec) 00:20:36.845 slat (usec): min=7, max=10156, avg=23.75, stdev=256.34 00:20:36.845 clat (msec): min=16, max=183, avg=91.48, stdev=27.19 00:20:36.845 lat (msec): min=16, max=183, avg=91.50, stdev=27.19 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 69], 00:20:36.845 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 104], 00:20:36.845 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 127], 95.00th=[ 136], 00:20:36.845 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 184], 00:20:36.845 | 99.99th=[ 184] 00:20:36.845 bw ( KiB/s): min= 512, max= 1103, per=4.19%, avg=694.05, stdev=142.08, samples=20 00:20:36.845 iops : min= 128, max= 275, avg=173.45, stdev=35.42, samples=20 00:20:36.845 lat (msec) : 20=0.91%, 50=6.56%, 100=49.74%, 250=42.78% 00:20:36.845 cpu : usr=43.09%, sys=2.82%, ctx=1377, majf=0, minf=0 00:20:36.845 IO depths : 1=0.1%, 2=0.3%, 4=0.9%, 8=82.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86795: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=167, BW=670KiB/s (686kB/s)(6728KiB/10040msec) 00:20:36.845 slat (usec): min=8, max=8024, avg=25.99, stdev=233.63 00:20:36.845 clat (msec): min=10, max=204, avg=95.30, stdev=27.59 00:20:36.845 lat (msec): min=10, max=204, avg=95.33, stdev=27.58 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 62], 20.00th=[ 72], 00:20:36.845 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 108], 00:20:36.845 | 70.00th=[ 109], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 144], 00:20:36.845 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 205], 99.95th=[ 205], 00:20:36.845 | 99.99th=[ 205] 00:20:36.845 bw ( KiB/s): min= 512, max= 944, per=4.02%, avg=665.75, stdev=126.51, samples=20 00:20:36.845 iops : min= 128, max= 236, avg=166.40, stdev=31.61, samples=20 00:20:36.845 lat (msec) : 20=0.95%, 50=4.64%, 100=48.57%, 250=45.84% 00:20:36.845 cpu : usr=31.46%, sys=1.71%, ctx=874, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=1.1%, 4=4.6%, 8=78.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=88.6%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86796: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=179, BW=717KiB/s (734kB/s)(7168KiB/10003msec) 00:20:36.845 slat (usec): min=3, max=8026, avg=18.09, stdev=189.36 00:20:36.845 clat (msec): min=3, max=160, avg=89.22, stdev=26.43 00:20:36.845 lat (msec): min=3, max=160, avg=89.24, stdev=26.43 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 8], 5.00th=[ 47], 
10.00th=[ 61], 20.00th=[ 70], 00:20:36.845 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 96], 00:20:36.845 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 132], 00:20:36.845 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 161], 00:20:36.845 | 99.99th=[ 161] 00:20:36.845 bw ( KiB/s): min= 608, max= 1024, per=4.20%, avg=696.84, stdev=117.21, samples=19 00:20:36.845 iops : min= 152, max= 256, avg=174.21, stdev=29.30, samples=19 00:20:36.845 lat (msec) : 4=0.17%, 10=1.06%, 50=6.70%, 100=55.75%, 250=36.33% 00:20:36.845 cpu : usr=32.88%, sys=1.89%, ctx=1111, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86797: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=167, BW=669KiB/s (685kB/s)(6696KiB/10012msec) 00:20:36.845 slat (usec): min=4, max=4033, avg=23.21, stdev=169.87 00:20:36.845 clat (msec): min=37, max=193, avg=95.54, stdev=28.62 00:20:36.845 lat (msec): min=37, max=193, avg=95.56, stdev=28.62 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 43], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 72], 00:20:36.845 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 105], 00:20:36.845 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 144], 00:20:36.845 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 194], 99.95th=[ 194], 00:20:36.845 | 99.99th=[ 194] 00:20:36.845 bw ( KiB/s): min= 384, max= 944, per=3.94%, avg=653.47, stdev=139.12, samples=19 00:20:36.845 iops : min= 96, max= 236, avg=163.32, stdev=34.79, samples=19 00:20:36.845 lat (msec) : 50=3.58%, 100=51.55%, 250=44.86% 00:20:36.845 cpu : usr=40.39%, sys=2.14%, ctx=1197, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86798: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=174, BW=698KiB/s (715kB/s)(7008KiB/10033msec) 00:20:36.845 slat (usec): min=4, max=9025, avg=24.43, stdev=289.15 00:20:36.845 clat (msec): min=14, max=156, avg=91.50, stdev=25.63 00:20:36.845 lat (msec): min=14, max=156, avg=91.52, stdev=25.64 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 36], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:20:36.845 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 100], 00:20:36.845 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 136], 00:20:36.845 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 157], 99.95th=[ 157], 00:20:36.845 | 99.99th=[ 157] 00:20:36.845 bw ( KiB/s): min= 536, max= 1016, per=4.19%, avg=694.30, stdev=123.17, samples=20 00:20:36.845 iops : min= 134, max= 254, avg=173.55, stdev=30.81, samples=20 00:20:36.845 lat (msec) : 20=0.80%, 50=4.51%, 100=55.94%, 250=38.76% 00:20:36.845 cpu : usr=33.78%, sys=2.13%, ctx=924, majf=0, minf=9 00:20:36.845 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=82.0%, 16=16.3%, 
32=0.0%, >=64=0.0% 00:20:36.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.845 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.845 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.845 filename0: (groupid=0, jobs=1): err= 0: pid=86799: Wed Nov 20 15:11:05 2024 00:20:36.845 read: IOPS=168, BW=672KiB/s (688kB/s)(6744KiB/10033msec) 00:20:36.845 slat (usec): min=5, max=8034, avg=33.88, stdev=304.39 00:20:36.845 clat (msec): min=41, max=170, avg=95.06, stdev=24.85 00:20:36.845 lat (msec): min=41, max=170, avg=95.09, stdev=24.84 00:20:36.845 clat percentiles (msec): 00:20:36.845 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 72], 00:20:36.845 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 99], 60.00th=[ 105], 00:20:36.845 | 70.00th=[ 109], 80.00th=[ 116], 90.00th=[ 131], 95.00th=[ 138], 00:20:36.845 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 171], 00:20:36.845 | 99.99th=[ 171] 00:20:36.845 bw ( KiB/s): min= 528, max= 912, per=4.03%, avg=667.90, stdev=105.43, samples=20 00:20:36.845 iops : min= 132, max= 228, avg=166.95, stdev=26.38, samples=20 00:20:36.846 lat (msec) : 50=2.55%, 100=50.83%, 250=46.62% 00:20:36.846 cpu : usr=40.04%, sys=2.66%, ctx=1776, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=77.9%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=88.8%, 8=10.1%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86800: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=178, BW=712KiB/s (730kB/s)(7152KiB/10039msec) 00:20:36.846 slat (usec): min=6, max=8035, avg=26.63, stdev=284.29 00:20:36.846 clat (msec): min=18, max=163, avg=89.69, stdev=25.75 00:20:36.846 lat (msec): min=18, max=163, avg=89.72, stdev=25.75 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 70], 00:20:36.846 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 88], 60.00th=[ 99], 00:20:36.846 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 122], 95.00th=[ 133], 00:20:36.846 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 165], 00:20:36.846 | 99.99th=[ 165] 00:20:36.846 bw ( KiB/s): min= 584, max= 1000, per=4.28%, avg=708.20, stdev=117.78, samples=20 00:20:36.846 iops : min= 146, max= 250, avg=177.00, stdev=29.41, samples=20 00:20:36.846 lat (msec) : 20=0.78%, 50=5.76%, 100=55.03%, 250=38.42% 00:20:36.846 cpu : usr=40.06%, sys=2.40%, ctx=1125, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=0.3%, 4=1.5%, 8=82.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1788,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86801: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=178, BW=713KiB/s (730kB/s)(7168KiB/10056msec) 00:20:36.846 slat (usec): min=4, max=382, avg=14.25, stdev=10.75 00:20:36.846 clat (usec): min=1553, max=182217, avg=89548.66, stdev=35266.58 00:20:36.846 lat (usec): min=1563, 
max=182232, avg=89562.91, stdev=35266.22 00:20:36.846 clat percentiles (usec): 00:20:36.846 | 1.00th=[ 1647], 5.00th=[ 2868], 10.00th=[ 48497], 20.00th=[ 66323], 00:20:36.846 | 30.00th=[ 72877], 40.00th=[ 84411], 50.00th=[ 95945], 60.00th=[106431], 00:20:36.846 | 70.00th=[108528], 80.00th=[119014], 90.00th=[131597], 95.00th=[135267], 00:20:36.846 | 99.00th=[145753], 99.50th=[156238], 99.90th=[175113], 99.95th=[181404], 00:20:36.846 | 99.99th=[181404] 00:20:36.846 bw ( KiB/s): min= 536, max= 1920, per=4.28%, avg=709.95, stdev=298.82, samples=20 00:20:36.846 iops : min= 134, max= 480, avg=177.45, stdev=74.69, samples=20 00:20:36.846 lat (msec) : 2=3.91%, 4=3.24%, 20=0.89%, 50=2.62%, 100=44.48% 00:20:36.846 lat (msec) : 250=44.87% 00:20:36.846 cpu : usr=32.63%, sys=2.21%, ctx=907, majf=0, minf=0 00:20:36.846 IO depths : 1=0.5%, 2=1.5%, 4=4.0%, 8=78.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=88.9%, 8=10.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86802: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=178, BW=713KiB/s (731kB/s)(7144KiB/10014msec) 00:20:36.846 slat (usec): min=7, max=8052, avg=23.40, stdev=190.31 00:20:36.846 clat (msec): min=25, max=159, avg=89.59, stdev=24.69 00:20:36.846 lat (msec): min=25, max=159, avg=89.62, stdev=24.68 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 71], 00:20:36.846 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 97], 00:20:36.846 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 132], 00:20:36.846 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 161], 99.95th=[ 161], 00:20:36.846 | 99.99th=[ 161] 00:20:36.846 bw ( KiB/s): min= 536, max= 976, per=4.27%, avg=708.00, stdev=106.62, samples=20 00:20:36.846 iops : min= 134, max= 244, avg=176.95, stdev=26.65, samples=20 00:20:36.846 lat (msec) : 50=6.27%, 100=57.39%, 250=36.34% 00:20:36.846 cpu : usr=31.25%, sys=1.88%, ctx=860, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.3%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86803: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=178, BW=715KiB/s (732kB/s)(7180KiB/10040msec) 00:20:36.846 slat (usec): min=7, max=8046, avg=34.54, stdev=389.96 00:20:36.846 clat (msec): min=10, max=167, avg=89.26, stdev=26.31 00:20:36.846 lat (msec): min=10, max=167, avg=89.29, stdev=26.30 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 15], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:20:36.846 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 96], 00:20:36.846 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:20:36.846 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 169], 00:20:36.846 | 99.99th=[ 169] 00:20:36.846 bw ( KiB/s): min= 576, max= 1047, per=4.29%, avg=710.85, stdev=125.99, samples=20 00:20:36.846 iops : min= 144, max= 261, avg=177.65, stdev=31.40, samples=20 00:20:36.846 
lat (msec) : 20=1.00%, 50=6.80%, 100=57.05%, 250=35.15% 00:20:36.846 cpu : usr=31.75%, sys=2.04%, ctx=870, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86804: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=177, BW=710KiB/s (727kB/s)(7112KiB/10022msec) 00:20:36.846 slat (usec): min=6, max=8026, avg=27.04, stdev=284.85 00:20:36.846 clat (msec): min=35, max=164, avg=90.07, stdev=24.82 00:20:36.846 lat (msec): min=35, max=164, avg=90.10, stdev=24.82 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 70], 00:20:36.846 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 99], 00:20:36.846 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 134], 00:20:36.846 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 165], 99.95th=[ 165], 00:20:36.846 | 99.99th=[ 165] 00:20:36.846 bw ( KiB/s): min= 584, max= 1024, per=4.25%, avg=704.40, stdev=112.78, samples=20 00:20:36.846 iops : min= 146, max= 256, avg=176.05, stdev=28.18, samples=20 00:20:36.846 lat (msec) : 50=7.03%, 100=54.95%, 250=38.02% 00:20:36.846 cpu : usr=38.55%, sys=2.44%, ctx=1097, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86805: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=166, BW=667KiB/s (683kB/s)(6680KiB/10018msec) 00:20:36.846 slat (usec): min=4, max=8032, avg=21.73, stdev=206.98 00:20:36.846 clat (msec): min=36, max=192, avg=95.84, stdev=26.93 00:20:36.846 lat (msec): min=36, max=192, avg=95.86, stdev=26.93 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 45], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 72], 00:20:36.846 | 30.00th=[ 75], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 107], 00:20:36.846 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 131], 95.00th=[ 140], 00:20:36.846 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 192], 00:20:36.846 | 99.99th=[ 192] 00:20:36.846 bw ( KiB/s): min= 496, max= 920, per=4.01%, avg=664.15, stdev=129.93, samples=20 00:20:36.846 iops : min= 124, max= 230, avg=166.00, stdev=32.51, samples=20 00:20:36.846 lat (msec) : 50=3.77%, 100=49.76%, 250=46.47% 00:20:36.846 cpu : usr=38.46%, sys=2.26%, ctx=1122, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=2.2%, 4=8.6%, 8=74.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=89.4%, 8=8.8%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86806: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=163, BW=653KiB/s (669kB/s)(6552KiB/10033msec) 
00:20:36.846 slat (nsec): min=4602, max=42433, avg=15062.09, stdev=5517.53 00:20:36.846 clat (msec): min=31, max=172, avg=97.91, stdev=25.49 00:20:36.846 lat (msec): min=31, max=172, avg=97.92, stdev=25.49 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 44], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 72], 00:20:36.846 | 30.00th=[ 85], 40.00th=[ 96], 50.00th=[ 99], 60.00th=[ 108], 00:20:36.846 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 144], 00:20:36.846 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 174], 00:20:36.846 | 99.99th=[ 174] 00:20:36.846 bw ( KiB/s): min= 528, max= 961, per=3.91%, avg=648.75, stdev=121.93, samples=20 00:20:36.846 iops : min= 132, max= 240, avg=162.15, stdev=30.46, samples=20 00:20:36.846 lat (msec) : 50=4.03%, 100=46.52%, 250=49.45% 00:20:36.846 cpu : usr=33.37%, sys=2.03%, ctx=1175, majf=0, minf=9 00:20:36.846 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=81.5%, 16=17.6%, 32=0.0%, >=64=0.0% 00:20:36.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 complete : 0=0.0%, 4=88.5%, 8=11.4%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.846 issued rwts: total=1638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.846 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.846 filename1: (groupid=0, jobs=1): err= 0: pid=86807: Wed Nov 20 15:11:05 2024 00:20:36.846 read: IOPS=169, BW=677KiB/s (693kB/s)(6784KiB/10028msec) 00:20:36.846 slat (usec): min=5, max=8029, avg=31.44, stdev=271.54 00:20:36.846 clat (msec): min=35, max=197, avg=94.44, stdev=26.68 00:20:36.846 lat (msec): min=35, max=197, avg=94.47, stdev=26.68 00:20:36.846 clat percentiles (msec): 00:20:36.846 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 63], 20.00th=[ 71], 00:20:36.847 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 105], 00:20:36.847 | 70.00th=[ 110], 80.00th=[ 117], 90.00th=[ 130], 95.00th=[ 138], 00:20:36.847 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 197], 99.95th=[ 197], 00:20:36.847 | 99.99th=[ 197] 00:20:36.847 bw ( KiB/s): min= 496, max= 968, per=4.06%, avg=672.00, stdev=118.40, samples=20 00:20:36.847 iops : min= 124, max= 242, avg=168.00, stdev=29.60, samples=20 00:20:36.847 lat (msec) : 50=4.48%, 100=49.35%, 250=46.17% 00:20:36.847 cpu : usr=42.54%, sys=2.69%, ctx=1632, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=78.4%, 16=15.4%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=88.4%, 8=10.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86808: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=173, BW=693KiB/s (710kB/s)(6952KiB/10032msec) 00:20:36.847 slat (usec): min=7, max=4039, avg=18.93, stdev=136.41 00:20:36.847 clat (msec): min=37, max=181, avg=92.21, stdev=25.46 00:20:36.847 lat (msec): min=37, max=181, avg=92.23, stdev=25.46 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 44], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 70], 00:20:36.847 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 94], 60.00th=[ 104], 00:20:36.847 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 125], 95.00th=[ 136], 00:20:36.847 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 180], 99.95th=[ 182], 00:20:36.847 | 99.99th=[ 182] 00:20:36.847 bw ( KiB/s): min= 582, max= 913, per=4.16%, avg=688.75, stdev=106.98, samples=20 00:20:36.847 
iops : min= 145, max= 228, avg=172.15, stdev=26.74, samples=20 00:20:36.847 lat (msec) : 50=5.06%, 100=52.24%, 250=42.69% 00:20:36.847 cpu : usr=40.01%, sys=2.24%, ctx=1156, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=0.3%, 4=1.3%, 8=81.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86809: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=171, BW=686KiB/s (703kB/s)(6864KiB/10004msec) 00:20:36.847 slat (usec): min=7, max=8042, avg=32.25, stdev=335.33 00:20:36.847 clat (msec): min=3, max=192, avg=93.09, stdev=29.41 00:20:36.847 lat (msec): min=3, max=192, avg=93.12, stdev=29.44 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 5], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:20:36.847 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 103], 00:20:36.847 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 136], 00:20:36.847 | 99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 192], 99.95th=[ 192], 00:20:36.847 | 99.99th=[ 192] 00:20:36.847 bw ( KiB/s): min= 512, max= 968, per=3.98%, avg=658.11, stdev=117.93, samples=19 00:20:36.847 iops : min= 128, max= 242, avg=164.53, stdev=29.48, samples=19 00:20:36.847 lat (msec) : 4=0.17%, 10=2.04%, 50=4.66%, 100=52.97%, 250=40.15% 00:20:36.847 cpu : usr=31.18%, sys=2.00%, ctx=866, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=88.7%, 8=10.0%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1716,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86810: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=176, BW=705KiB/s (721kB/s)(7064KiB/10026msec) 00:20:36.847 slat (usec): min=3, max=8035, avg=33.37, stdev=356.57 00:20:36.847 clat (msec): min=39, max=154, avg=90.66, stdev=24.77 00:20:36.847 lat (msec): min=39, max=154, avg=90.69, stdev=24.78 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 43], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 70], 00:20:36.847 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 100], 00:20:36.847 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 133], 00:20:36.847 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:20:36.847 | 99.99th=[ 155] 00:20:36.847 bw ( KiB/s): min= 560, max= 992, per=4.23%, avg=700.05, stdev=105.13, samples=20 00:20:36.847 iops : min= 140, max= 248, avg=175.00, stdev=26.27, samples=20 00:20:36.847 lat (msec) : 50=5.04%, 100=55.89%, 250=39.07% 00:20:36.847 cpu : usr=40.86%, sys=2.36%, ctx=1057, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=87.7%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86811: Wed Nov 20 
15:11:05 2024 00:20:36.847 read: IOPS=167, BW=669KiB/s (685kB/s)(6700KiB/10014msec) 00:20:36.847 slat (nsec): min=4250, max=55122, avg=14721.72, stdev=5548.66 00:20:36.847 clat (msec): min=35, max=181, avg=95.56, stdev=27.55 00:20:36.847 lat (msec): min=35, max=181, avg=95.57, stdev=27.55 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 43], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:20:36.847 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 99], 60.00th=[ 108], 00:20:36.847 | 70.00th=[ 110], 80.00th=[ 118], 90.00th=[ 132], 95.00th=[ 144], 00:20:36.847 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:20:36.847 | 99.99th=[ 182] 00:20:36.847 bw ( KiB/s): min= 504, max= 968, per=4.01%, avg=663.60, stdev=142.86, samples=20 00:20:36.847 iops : min= 126, max= 242, avg=165.85, stdev=35.74, samples=20 00:20:36.847 lat (msec) : 50=6.09%, 100=46.57%, 250=47.34% 00:20:36.847 cpu : usr=34.27%, sys=1.77%, ctx=948, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=2.1%, 4=8.3%, 8=74.9%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=89.2%, 8=9.0%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86812: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=182, BW=731KiB/s (749kB/s)(7316KiB/10004msec) 00:20:36.847 slat (usec): min=4, max=8029, avg=37.52, stdev=394.30 00:20:36.847 clat (msec): min=4, max=158, avg=87.35, stdev=25.84 00:20:36.847 lat (msec): min=4, max=158, avg=87.39, stdev=25.83 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 68], 00:20:36.847 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 96], 00:20:36.847 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 131], 00:20:36.847 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 159], 99.95th=[ 159], 00:20:36.847 | 99.99th=[ 159] 00:20:36.847 bw ( KiB/s): min= 608, max= 1024, per=4.31%, avg=713.68, stdev=112.08, samples=19 00:20:36.847 iops : min= 152, max= 256, avg=178.42, stdev=28.02, samples=19 00:20:36.847 lat (msec) : 10=0.87%, 50=6.23%, 100=58.23%, 250=34.66% 00:20:36.847 cpu : usr=38.90%, sys=2.53%, ctx=1269, majf=0, minf=10 00:20:36.847 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86813: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=165, BW=660KiB/s (676kB/s)(6620KiB/10025msec) 00:20:36.847 slat (usec): min=4, max=8028, avg=28.80, stdev=341.01 00:20:36.847 clat (msec): min=42, max=192, avg=96.75, stdev=25.20 00:20:36.847 lat (msec): min=42, max=192, avg=96.78, stdev=25.19 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 48], 5.00th=[ 60], 10.00th=[ 64], 20.00th=[ 72], 00:20:36.847 | 30.00th=[ 79], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 108], 00:20:36.847 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 138], 00:20:36.847 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:20:36.847 | 99.99th=[ 192] 00:20:36.847 bw ( 
KiB/s): min= 512, max= 912, per=3.96%, avg=655.15, stdev=100.50, samples=20 00:20:36.847 iops : min= 128, max= 228, avg=163.75, stdev=25.11, samples=20 00:20:36.847 lat (msec) : 50=1.81%, 100=50.69%, 250=47.49% 00:20:36.847 cpu : usr=32.04%, sys=2.14%, ctx=883, majf=0, minf=9 00:20:36.847 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=79.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:36.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.847 issued rwts: total=1655,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.847 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.847 filename2: (groupid=0, jobs=1): err= 0: pid=86814: Wed Nov 20 15:11:05 2024 00:20:36.847 read: IOPS=173, BW=692KiB/s (709kB/s)(6936KiB/10023msec) 00:20:36.847 slat (usec): min=3, max=5874, avg=29.46, stdev=245.30 00:20:36.847 clat (msec): min=37, max=191, avg=92.33, stdev=26.35 00:20:36.847 lat (msec): min=37, max=191, avg=92.36, stdev=26.36 00:20:36.847 clat percentiles (msec): 00:20:36.847 | 1.00th=[ 44], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 71], 00:20:36.847 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 102], 00:20:36.847 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 123], 95.00th=[ 134], 00:20:36.847 | 99.00th=[ 176], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:20:36.847 | 99.99th=[ 192] 00:20:36.847 bw ( KiB/s): min= 512, max= 960, per=4.14%, avg=686.80, stdev=114.78, samples=20 00:20:36.847 iops : min= 128, max= 240, avg=171.65, stdev=28.67, samples=20 00:20:36.847 lat (msec) : 50=3.06%, 100=54.50%, 250=42.45% 00:20:36.847 cpu : usr=42.82%, sys=2.70%, ctx=1290, majf=0, minf=10 00:20:36.848 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.6%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:36.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.848 complete : 0=0.0%, 4=88.0%, 8=11.2%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.848 issued rwts: total=1734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.848 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:36.848 filename2: (groupid=0, jobs=1): err= 0: pid=86815: Wed Nov 20 15:11:05 2024 00:20:36.848 read: IOPS=166, BW=665KiB/s (680kB/s)(6648KiB/10004msec) 00:20:36.848 slat (usec): min=4, max=8025, avg=31.29, stdev=354.07 00:20:36.848 clat (msec): min=3, max=192, avg=96.17, stdev=29.82 00:20:36.848 lat (msec): min=3, max=192, avg=96.20, stdev=29.83 00:20:36.848 clat percentiles (msec): 00:20:36.848 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:20:36.848 | 30.00th=[ 80], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 108], 00:20:36.848 | 70.00th=[ 109], 80.00th=[ 121], 90.00th=[ 132], 95.00th=[ 144], 00:20:36.848 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:20:36.848 | 99.99th=[ 192] 00:20:36.848 bw ( KiB/s): min= 512, max= 1000, per=3.88%, avg=642.16, stdev=139.24, samples=19 00:20:36.848 iops : min= 128, max= 250, avg=160.53, stdev=34.80, samples=19 00:20:36.848 lat (msec) : 4=0.18%, 10=1.14%, 50=5.48%, 100=44.89%, 250=48.32% 00:20:36.848 cpu : usr=32.71%, sys=2.03%, ctx=918, majf=0, minf=9 00:20:36.848 IO depths : 1=0.1%, 2=2.7%, 4=10.8%, 8=72.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:20:36.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.848 complete : 0=0.0%, 4=90.0%, 8=7.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:36.848 issued rwts: total=1662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:36.848 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:20:36.848 00:20:36.848 Run status group 0 (all jobs): 00:20:36.848 READ: bw=16.2MiB/s (16.9MB/s), 653KiB/s-731KiB/s (669kB/s-749kB/s), io=163MiB (170MB), run=10003-10056msec 00:20:36.848 15:11:05 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:36.848 15:11:05 -- target/dif.sh@43 -- # local sub 00:20:36.848 15:11:05 -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.848 15:11:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:36.848 15:11:05 -- target/dif.sh@36 -- # local sub_id=0 00:20:36.848 15:11:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.848 15:11:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:36.848 15:11:05 -- target/dif.sh@36 -- # local sub_id=1 00:20:36.848 15:11:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@45 -- # for sub in "$@" 00:20:36.848 15:11:05 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:36.848 15:11:05 -- target/dif.sh@36 -- # local sub_id=2 00:20:36.848 15:11:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # numjobs=2 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # iodepth=8 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # runtime=5 00:20:36.848 15:11:05 -- target/dif.sh@115 -- # files=1 00:20:36.848 15:11:05 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:36.848 15:11:05 -- target/dif.sh@28 -- # local sub 00:20:36.848 15:11:05 -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.848 15:11:05 -- target/dif.sh@31 -- # create_subsystem 0 00:20:36.848 15:11:05 -- target/dif.sh@18 -- # local sub_id=0 00:20:36.848 15:11:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:36.848 15:11:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 bdev_null0 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 [2024-11-20 15:11:05.816838] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@30 -- # for sub in "$@" 00:20:36.848 15:11:05 -- target/dif.sh@31 -- # create_subsystem 1 00:20:36.848 15:11:05 -- target/dif.sh@18 -- # local sub_id=1 00:20:36.848 15:11:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 bdev_null1 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.848 15:11:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.848 15:11:05 -- common/autotest_common.sh@10 -- # set +x 00:20:36.848 15:11:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.848 15:11:05 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:36.848 15:11:05 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:36.848 15:11:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:36.848 15:11:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.848 15:11:05 -- target/dif.sh@82 -- # gen_fio_conf 00:20:36.848 15:11:05 -- nvmf/common.sh@520 -- # config=() 00:20:36.848 15:11:05 -- target/dif.sh@54 -- # local file 00:20:36.848 15:11:05 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.848 15:11:05 -- nvmf/common.sh@520 -- # local subsystem config 00:20:36.848 15:11:05 -- target/dif.sh@56 -- # cat 00:20:36.848 15:11:05 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:36.848 15:11:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:36.848 15:11:05 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:36.848 15:11:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:36.848 { 00:20:36.848 "params": { 00:20:36.848 "name": "Nvme$subsystem", 00:20:36.848 "trtype": "$TEST_TRANSPORT", 00:20:36.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.848 "adrfam": "ipv4", 00:20:36.848 "trsvcid": "$NVMF_PORT", 00:20:36.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.848 "hdgst": ${hdgst:-false}, 00:20:36.848 "ddgst": ${ddgst:-false} 00:20:36.848 }, 00:20:36.848 "method": "bdev_nvme_attach_controller" 00:20:36.848 } 00:20:36.848 EOF 00:20:36.848 )") 00:20:36.848 15:11:05 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:36.848 15:11:05 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.848 15:11:05 -- common/autotest_common.sh@1330 -- # shift 00:20:36.848 15:11:05 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:36.848 15:11:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.848 15:11:05 -- nvmf/common.sh@542 -- # cat 00:20:36.848 15:11:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.848 15:11:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:36.848 15:11:05 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:36.848 15:11:05 -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.848 15:11:05 -- target/dif.sh@73 -- # cat 00:20:36.848 15:11:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:36.848 15:11:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:36.848 15:11:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:36.848 { 00:20:36.848 "params": { 00:20:36.848 "name": "Nvme$subsystem", 00:20:36.848 "trtype": "$TEST_TRANSPORT", 00:20:36.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.848 "adrfam": "ipv4", 00:20:36.849 "trsvcid": "$NVMF_PORT", 00:20:36.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.849 "hdgst": ${hdgst:-false}, 00:20:36.849 "ddgst": ${ddgst:-false} 00:20:36.849 }, 00:20:36.849 "method": "bdev_nvme_attach_controller" 00:20:36.849 } 00:20:36.849 EOF 00:20:36.849 )") 00:20:36.849 15:11:05 -- nvmf/common.sh@542 -- # cat 00:20:36.849 15:11:05 -- target/dif.sh@72 -- # (( file++ )) 00:20:36.849 15:11:05 -- target/dif.sh@72 -- # (( file <= files )) 00:20:36.849 15:11:05 -- nvmf/common.sh@544 -- # jq . 
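The xtrace above expands the dif.sh helpers inline, which makes the underlying flow hard to follow in the raw capture: create a DIF-enabled null bdev, publish it through an NVMe/TCP subsystem, generate a JSON attach-controller config plus a fio job file on the fly, and hand both to fio through the SPDK bdev plugin. The bash sketch below condenses that flow using only commands and parameters visible in this log; it is an illustration, not a copy of dif.sh. Assumptions: scripts/rpc.py stands in for the harness's rpc_cmd wrapper, the outer "subsystems"/"bdev" wrapper of the JSON config (assembled by gen_nvmf_target_json and jq, only partially printed here) is an assumed shape, and the Nvme0n1 bdev name, thread=1 setting, and /tmp file paths are illustrative rather than taken from this capture.

#!/usr/bin/env bash
# Condensed sketch of the flow captured above (illustrative; see assumptions in the note).
set -e

RPC="scripts/rpc.py"   # assumed RPC CLI; the test log drives the same methods via rpc_cmd

# 1) Back the namespace with a null bdev: 64 MB, 512-byte blocks, 16-byte metadata, DIF type 1
#    (same arguments as bdev_null_create in the log above).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# 2) Export it over NVMe/TCP on 10.0.0.2:4420, as the create_subsystem 0 helper does above.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# 3) JSON config for fio's spdk_bdev ioengine: one bdev_nvme_attach_controller entry per
#    subsystem. The params mirror the config printed in the log; the outer wrapper is assumed.
cat > /tmp/nvme_attach.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# 4) Job file matching the run that follows (rw=randread, bs=8k,16k,128k, iodepth=8,
#    numjobs=2, runtime=5). Nvme0n1 is the conventional name of the attached namespace.
cat > /tmp/dif.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
EOF

# 5) Run fio through the SPDK bdev plugin, as the fio_bdev wrapper does above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_attach.json /tmp/dif.fio

In the captured run the two generated files are never written to disk: the command line reads fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 because bash process substitution feeds the attach-controller config on descriptor 62 (the argument to --spdk_json_conf) and the gen_fio_conf job description on descriptor 61 (fio's positional job file).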
00:20:36.849 15:11:05 -- nvmf/common.sh@545 -- # IFS=, 00:20:36.849 15:11:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:36.849 "params": { 00:20:36.849 "name": "Nvme0", 00:20:36.849 "trtype": "tcp", 00:20:36.849 "traddr": "10.0.0.2", 00:20:36.849 "adrfam": "ipv4", 00:20:36.849 "trsvcid": "4420", 00:20:36.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:36.849 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:36.849 "hdgst": false, 00:20:36.849 "ddgst": false 00:20:36.849 }, 00:20:36.849 "method": "bdev_nvme_attach_controller" 00:20:36.849 },{ 00:20:36.849 "params": { 00:20:36.849 "name": "Nvme1", 00:20:36.849 "trtype": "tcp", 00:20:36.849 "traddr": "10.0.0.2", 00:20:36.849 "adrfam": "ipv4", 00:20:36.849 "trsvcid": "4420", 00:20:36.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.849 "hdgst": false, 00:20:36.849 "ddgst": false 00:20:36.849 }, 00:20:36.849 "method": "bdev_nvme_attach_controller" 00:20:36.849 }' 00:20:36.849 15:11:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:36.849 15:11:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:36.849 15:11:05 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:36.849 15:11:05 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:36.849 15:11:05 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:36.849 15:11:05 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:36.849 15:11:05 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:36.849 15:11:05 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:36.849 15:11:05 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:36.849 15:11:05 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:36.849 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:36.849 ... 00:20:36.849 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:36.849 ... 00:20:36.849 fio-3.35 00:20:36.849 Starting 4 threads 00:20:36.849 [2024-11-20 15:11:06.453167] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:36.849 [2024-11-20 15:11:06.453230] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:41.036 00:20:41.036 filename0: (groupid=0, jobs=1): err= 0: pid=86967: Wed Nov 20 15:11:11 2024 00:20:41.036 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5003msec) 00:20:41.036 slat (nsec): min=7064, max=46388, avg=13822.02, stdev=4759.37 00:20:41.036 clat (usec): min=742, max=9686, avg=4171.89, stdev=981.45 00:20:41.036 lat (usec): min=752, max=9696, avg=4185.71, stdev=980.74 00:20:41.036 clat percentiles (usec): 00:20:41.036 | 1.00th=[ 1582], 5.00th=[ 2180], 10.00th=[ 2606], 20.00th=[ 3687], 00:20:41.036 | 30.00th=[ 3785], 40.00th=[ 3884], 50.00th=[ 4555], 60.00th=[ 4817], 00:20:41.036 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 4948], 95.00th=[ 5145], 00:20:41.036 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7504], 99.95th=[ 8455], 00:20:41.036 | 99.99th=[ 9634] 00:20:41.036 bw ( KiB/s): min=12800, max=17504, per=22.55%, avg=14839.11, stdev=1784.58, samples=9 00:20:41.036 iops : min= 1600, max= 2186, avg=1854.67, stdev=222.70, samples=9 00:20:41.036 lat (usec) : 750=0.01%, 1000=0.08% 00:20:41.036 lat (msec) : 2=2.64%, 4=40.14%, 10=57.12% 00:20:41.036 cpu : usr=92.12%, sys=6.90%, ctx=7, majf=0, minf=0 00:20:41.036 IO depths : 1=0.1%, 2=11.4%, 4=57.9%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.036 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.036 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.036 issued rwts: total=9494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.036 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.036 filename0: (groupid=0, jobs=1): err= 0: pid=86968: Wed Nov 20 15:11:11 2024 00:20:41.036 read: IOPS=2125, BW=16.6MiB/s (17.4MB/s)(83.1MiB/5002msec) 00:20:41.036 slat (nsec): min=5357, max=60237, avg=16513.62, stdev=4378.09 00:20:41.036 clat (usec): min=977, max=9644, avg=3720.31, stdev=1036.27 00:20:41.036 lat (usec): min=988, max=9661, avg=3736.83, stdev=1035.86 00:20:41.036 clat percentiles (usec): 00:20:41.036 | 1.00th=[ 1598], 5.00th=[ 1975], 10.00th=[ 2114], 20.00th=[ 2638], 00:20:41.037 | 30.00th=[ 3064], 40.00th=[ 3720], 50.00th=[ 3785], 60.00th=[ 4015], 00:20:41.037 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5080], 00:20:41.037 | 99.00th=[ 5735], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 8225], 00:20:41.037 | 99.99th=[ 9241] 00:20:41.037 bw ( KiB/s): min=14861, max=18128, per=25.98%, avg=17096.56, stdev=1050.51, samples=9 00:20:41.037 iops : min= 1857, max= 2266, avg=2137.00, stdev=131.48, samples=9 00:20:41.037 lat (usec) : 1000=0.02% 00:20:41.037 lat (msec) : 2=5.47%, 4=54.25%, 10=40.27% 00:20:41.037 cpu : usr=91.62%, sys=7.28%, ctx=5, majf=0, minf=0 00:20:41.037 IO depths : 1=0.2%, 2=3.2%, 4=62.3%, 8=34.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 issued rwts: total=10631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.037 filename1: (groupid=0, jobs=1): err= 0: pid=86969: Wed Nov 20 15:11:11 2024 00:20:41.037 read: IOPS=2078, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5001msec) 00:20:41.037 slat (usec): min=6, max=249, avg=11.91, stdev= 5.08 00:20:41.037 clat (usec): min=1206, max=9733, avg=3815.50, stdev=1063.71 00:20:41.037 lat (usec): min=1214, max=9741, avg=3827.41, stdev=1063.12 
00:20:41.037 clat percentiles (usec): 00:20:41.037 | 1.00th=[ 1598], 5.00th=[ 2024], 10.00th=[ 2212], 20.00th=[ 2737], 00:20:41.037 | 30.00th=[ 3097], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 4293], 00:20:41.037 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5080], 00:20:41.037 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 8848], 99.95th=[ 8848], 00:20:41.037 | 99.99th=[ 9241] 00:20:41.037 bw ( KiB/s): min=14352, max=18064, per=25.36%, avg=16686.56, stdev=1443.99, samples=9 00:20:41.037 iops : min= 1794, max= 2258, avg=2085.78, stdev=180.46, samples=9 00:20:41.037 lat (msec) : 2=3.47%, 4=53.16%, 10=43.36% 00:20:41.037 cpu : usr=91.18%, sys=7.58%, ctx=55, majf=0, minf=0 00:20:41.037 IO depths : 1=0.1%, 2=4.6%, 4=61.6%, 8=33.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 complete : 0=0.0%, 4=98.3%, 8=1.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 issued rwts: total=10396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.037 filename1: (groupid=0, jobs=1): err= 0: pid=86970: Wed Nov 20 15:11:11 2024 00:20:41.037 read: IOPS=2125, BW=16.6MiB/s (17.4MB/s)(83.1MiB/5001msec) 00:20:41.037 slat (nsec): min=7874, max=74305, avg=16164.44, stdev=4118.26 00:20:41.037 clat (usec): min=1242, max=9637, avg=3722.05, stdev=1035.73 00:20:41.037 lat (usec): min=1256, max=9654, avg=3738.21, stdev=1035.73 00:20:41.037 clat percentiles (usec): 00:20:41.037 | 1.00th=[ 1598], 5.00th=[ 1975], 10.00th=[ 2114], 20.00th=[ 2638], 00:20:41.037 | 30.00th=[ 3097], 40.00th=[ 3752], 50.00th=[ 3818], 60.00th=[ 4015], 00:20:41.037 | 70.00th=[ 4555], 80.00th=[ 4686], 90.00th=[ 4817], 95.00th=[ 5080], 00:20:41.037 | 99.00th=[ 5735], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 8225], 00:20:41.037 | 99.99th=[ 9241] 00:20:41.037 bw ( KiB/s): min=14832, max=18080, per=25.98%, avg=17093.33, stdev=1033.30, samples=9 00:20:41.037 iops : min= 1854, max= 2260, avg=2136.67, stdev=129.16, samples=9 00:20:41.037 lat (msec) : 2=5.53%, 4=54.28%, 10=40.18% 00:20:41.037 cpu : usr=92.08%, sys=6.86%, ctx=73, majf=0, minf=0 00:20:41.037 IO depths : 1=0.1%, 2=3.2%, 4=62.3%, 8=34.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 complete : 0=0.0%, 4=98.8%, 8=1.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.037 issued rwts: total=10631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.037 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:41.037 00:20:41.037 Run status group 0 (all jobs): 00:20:41.037 READ: bw=64.3MiB/s (67.4MB/s), 14.8MiB/s-16.6MiB/s (15.5MB/s-17.4MB/s), io=322MiB (337MB), run=5001-5003msec 00:20:41.037 15:11:11 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:41.037 15:11:11 -- target/dif.sh@43 -- # local sub 00:20:41.037 15:11:11 -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.037 15:11:11 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.037 15:11:11 -- target/dif.sh@36 -- # local sub_id=0 00:20:41.037 15:11:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.037 15:11:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:41.037 15:11:11 -- target/dif.sh@36 -- # local sub_id=1 00:20:41.037 15:11:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 ************************************ 00:20:41.037 END TEST fio_dif_rand_params 00:20:41.037 ************************************ 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 00:20:41.037 real 0m23.025s 00:20:41.037 user 2m3.069s 00:20:41.037 sys 0m8.779s 00:20:41.037 15:11:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:41.037 15:11:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:41.037 15:11:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 ************************************ 00:20:41.037 START TEST fio_dif_digest 00:20:41.037 ************************************ 00:20:41.037 15:11:11 -- common/autotest_common.sh@1114 -- # fio_dif_digest 00:20:41.037 15:11:11 -- target/dif.sh@123 -- # local NULL_DIF 00:20:41.037 15:11:11 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:41.037 15:11:11 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:41.037 15:11:11 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:41.037 15:11:11 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:41.037 15:11:11 -- target/dif.sh@127 -- # numjobs=3 00:20:41.037 15:11:11 -- target/dif.sh@127 -- # iodepth=3 00:20:41.037 15:11:11 -- target/dif.sh@127 -- # runtime=10 00:20:41.037 15:11:11 -- target/dif.sh@128 -- # hdgst=true 00:20:41.037 15:11:11 -- target/dif.sh@128 -- # ddgst=true 00:20:41.037 15:11:11 -- target/dif.sh@130 -- # create_subsystems 0 00:20:41.037 15:11:11 -- target/dif.sh@28 -- # local sub 00:20:41.037 15:11:11 -- target/dif.sh@30 -- # for sub in "$@" 00:20:41.037 15:11:11 -- target/dif.sh@31 -- # create_subsystem 0 00:20:41.037 15:11:11 -- target/dif.sh@18 -- # local sub_id=0 00:20:41.037 15:11:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 bdev_null0 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.037 15:11:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:41.037 15:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.037 15:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.037 [2024-11-20 15:11:11.833935] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.296 15:11:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.296 15:11:11 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:41.296 15:11:11 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:41.296 15:11:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.296 15:11:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:41.296 15:11:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.296 15:11:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:20:41.296 15:11:11 -- nvmf/common.sh@520 -- # config=() 00:20:41.296 15:11:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:41.296 15:11:11 -- nvmf/common.sh@520 -- # local subsystem config 00:20:41.296 15:11:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:20:41.296 15:11:11 -- target/dif.sh@82 -- # gen_fio_conf 00:20:41.296 15:11:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.296 15:11:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:41.296 15:11:11 -- common/autotest_common.sh@1330 -- # shift 00:20:41.296 15:11:11 -- target/dif.sh@54 -- # local file 00:20:41.296 15:11:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:20:41.296 15:11:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:41.296 { 00:20:41.296 "params": { 00:20:41.296 "name": "Nvme$subsystem", 00:20:41.296 "trtype": "$TEST_TRANSPORT", 00:20:41.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.296 "adrfam": "ipv4", 00:20:41.296 "trsvcid": "$NVMF_PORT", 00:20:41.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.296 "hdgst": ${hdgst:-false}, 00:20:41.296 "ddgst": ${ddgst:-false} 00:20:41.296 }, 00:20:41.296 "method": "bdev_nvme_attach_controller" 00:20:41.296 } 00:20:41.296 EOF 00:20:41.296 )") 00:20:41.296 15:11:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.296 15:11:11 -- target/dif.sh@56 -- # cat 00:20:41.296 15:11:11 -- nvmf/common.sh@542 -- # cat 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:20:41.296 15:11:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:41.296 15:11:11 -- target/dif.sh@72 -- # (( file <= files )) 00:20:41.296 15:11:11 -- nvmf/common.sh@544 -- # jq . 
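The heredoc assembled by gen_nvmf_target_json above expands into the bdev_nvme_attach_controller entry printed just below, and it is streamed to fio's spdk_bdev external ioengine over /dev/fd/62 while gen_fio_conf supplies the job file over /dev/fd/61. A minimal standalone sketch of the same invocation follows; it assumes the repo layout visible in this run, the usual "subsystems"/"bdev"/"config" wrapper that --spdk_json_conf expects, and a hand-written job file targeting the exposed bdev (assumed here to be Nvme0n1):

  #!/usr/bin/env bash
  # Sketch only: reproduce the fio-over-spdk_bdev attach shown in this run.
  SPDK=/home/vagrant/spdk_repo/spdk        # path taken from this log
  cat > /tmp/nvme0.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
                "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": true, "ddgst": true } } ] } ] }
  EOF
  cat > /tmp/dif.fio <<'EOF'
  [filename0]
  ioengine=spdk_bdev
  thread=1                ; the SPDK fio plugin requires thread mode
  filename=Nvme0n1        ; assumed bdev name: controller Nvme0, namespace 1
  rw=randread
  bs=128k
  iodepth=3
  runtime=10
  time_based=1
  EOF
  # The log used /usr/src/fio/fio; any fio built to match the plugin works.
  LD_PRELOAD=$SPDK/build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    --spdk_json_conf /tmp/nvme0.json /tmp/dif.fio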
00:20:41.296 15:11:11 -- nvmf/common.sh@545 -- # IFS=, 00:20:41.296 15:11:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:41.296 "params": { 00:20:41.296 "name": "Nvme0", 00:20:41.296 "trtype": "tcp", 00:20:41.296 "traddr": "10.0.0.2", 00:20:41.296 "adrfam": "ipv4", 00:20:41.296 "trsvcid": "4420", 00:20:41.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:41.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:41.296 "hdgst": true, 00:20:41.296 "ddgst": true 00:20:41.296 }, 00:20:41.296 "method": "bdev_nvme_attach_controller" 00:20:41.296 }' 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:41.296 15:11:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:41.296 15:11:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:20:41.296 15:11:11 -- common/autotest_common.sh@1334 -- # asan_lib= 00:20:41.297 15:11:11 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:20:41.297 15:11:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:41.297 15:11:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:41.297 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:41.297 ... 00:20:41.297 fio-3.35 00:20:41.297 Starting 3 threads 00:20:41.556 [2024-11-20 15:11:12.342860] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:20:41.556 [2024-11-20 15:11:12.342945] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:53.812 00:20:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=87072: Wed Nov 20 15:11:22 2024 00:20:53.813 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(270MiB/10010msec) 00:20:53.813 slat (nsec): min=6991, max=49591, avg=17785.95, stdev=6378.64 00:20:53.813 clat (usec): min=13135, max=23168, avg=13860.64, stdev=1054.91 00:20:53.813 lat (usec): min=13143, max=23208, avg=13878.43, stdev=1054.97 00:20:53.813 clat percentiles (usec): 00:20:53.813 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:20:53.813 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:53.813 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14877], 95.00th=[16581], 00:20:53.813 | 99.00th=[17695], 99.50th=[18220], 99.90th=[23200], 99.95th=[23200], 00:20:53.813 | 99.99th=[23200] 00:20:53.813 bw ( KiB/s): min=23040, max=28416, per=33.31%, avg=27601.10, stdev=1522.36, samples=20 00:20:53.813 iops : min= 180, max= 222, avg=215.60, stdev=11.89, samples=20 00:20:53.813 lat (msec) : 20=99.86%, 50=0.14% 00:20:53.813 cpu : usr=92.15%, sys=7.20%, ctx=19, majf=0, minf=9 00:20:53.813 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=87073: Wed Nov 20 15:11:22 2024 00:20:53.813 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(270MiB/10008msec) 00:20:53.813 slat (nsec): min=8082, max=64316, avg=18379.76, stdev=6034.84 00:20:53.813 clat (usec): min=13179, max=23089, avg=13855.70, stdev=1047.08 00:20:53.813 lat (usec): min=13193, max=23107, avg=13874.08, stdev=1047.08 00:20:53.813 clat percentiles (usec): 00:20:53.813 | 1.00th=[13173], 5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:20:53.813 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:53.813 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14877], 95.00th=[16581], 00:20:53.813 | 99.00th=[17695], 99.50th=[18220], 99.90th=[22938], 99.95th=[22938], 00:20:53.813 | 99.99th=[23200] 00:20:53.813 bw ( KiB/s): min=23040, max=28416, per=33.32%, avg=27609.60, stdev=1525.35, samples=20 00:20:53.813 iops : min= 180, max= 222, avg=215.70, stdev=11.92, samples=20 00:20:53.813 lat (msec) : 20=99.86%, 50=0.14% 00:20:53.813 cpu : usr=91.17%, sys=8.14%, ctx=9, majf=0, minf=11 00:20:53.813 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.813 filename0: (groupid=0, jobs=1): err= 0: pid=87074: Wed Nov 20 15:11:22 2024 00:20:53.813 read: IOPS=215, BW=27.0MiB/s (28.3MB/s)(270MiB/10008msec) 00:20:53.813 slat (nsec): min=7940, max=49597, avg=18519.24, stdev=5828.09 00:20:53.813 clat (usec): min=13010, max=23093, avg=13856.81, stdev=1064.66 00:20:53.813 lat (usec): min=13018, max=23124, avg=13875.33, stdev=1064.64 00:20:53.813 clat percentiles (usec): 00:20:53.813 | 1.00th=[13173], 
5.00th=[13304], 10.00th=[13304], 20.00th=[13435], 00:20:53.813 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:20:53.813 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14877], 95.00th=[16581], 00:20:53.813 | 99.00th=[17957], 99.50th=[19792], 99.90th=[22938], 99.95th=[23200], 00:20:53.813 | 99.99th=[23200] 00:20:53.813 bw ( KiB/s): min=23040, max=28416, per=33.32%, avg=27606.65, stdev=1522.52, samples=20 00:20:53.813 iops : min= 180, max= 222, avg=215.65, stdev=11.89, samples=20 00:20:53.813 lat (msec) : 20=99.86%, 50=0.14% 00:20:53.813 cpu : usr=91.18%, sys=8.10%, ctx=11, majf=0, minf=9 00:20:53.813 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.813 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.813 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:53.813 00:20:53.813 Run status group 0 (all jobs): 00:20:53.813 READ: bw=80.9MiB/s (84.8MB/s), 27.0MiB/s-27.0MiB/s (28.3MB/s-28.3MB/s), io=810MiB (849MB), run=10008-10010msec 00:20:53.813 15:11:22 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:53.813 15:11:22 -- target/dif.sh@43 -- # local sub 00:20:53.813 15:11:22 -- target/dif.sh@45 -- # for sub in "$@" 00:20:53.813 15:11:22 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:53.813 15:11:22 -- target/dif.sh@36 -- # local sub_id=0 00:20:53.813 15:11:22 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:53.813 15:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.813 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.813 15:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.813 15:11:22 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:53.813 15:11:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.813 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.813 ************************************ 00:20:53.813 END TEST fio_dif_digest 00:20:53.813 ************************************ 00:20:53.813 15:11:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.813 00:20:53.813 real 0m10.825s 00:20:53.813 user 0m27.979s 00:20:53.813 sys 0m2.571s 00:20:53.813 15:11:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:53.813 15:11:22 -- common/autotest_common.sh@10 -- # set +x 00:20:53.813 15:11:22 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:53.813 15:11:22 -- target/dif.sh@147 -- # nvmftestfini 00:20:53.813 15:11:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:53.813 15:11:22 -- nvmf/common.sh@116 -- # sync 00:20:53.813 15:11:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:53.813 15:11:22 -- nvmf/common.sh@119 -- # set +e 00:20:53.813 15:11:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:53.813 15:11:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:53.813 rmmod nvme_tcp 00:20:53.813 rmmod nvme_fabrics 00:20:53.813 rmmod nvme_keyring 00:20:53.813 15:11:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:53.813 15:11:22 -- nvmf/common.sh@123 -- # set -e 00:20:53.813 15:11:22 -- nvmf/common.sh@124 -- # return 0 00:20:53.813 15:11:22 -- nvmf/common.sh@477 -- # '[' -n 86322 ']' 00:20:53.813 15:11:22 -- nvmf/common.sh@478 -- # killprocess 86322 00:20:53.813 15:11:22 -- common/autotest_common.sh@936 -- # '[' -z 86322 ']' 00:20:53.813 15:11:22 -- 
common/autotest_common.sh@940 -- # kill -0 86322 00:20:53.813 15:11:22 -- common/autotest_common.sh@941 -- # uname 00:20:53.813 15:11:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:53.813 15:11:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86322 00:20:53.813 killing process with pid 86322 00:20:53.813 15:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:53.813 15:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:53.813 15:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86322' 00:20:53.813 15:11:22 -- common/autotest_common.sh@955 -- # kill 86322 00:20:53.813 15:11:22 -- common/autotest_common.sh@960 -- # wait 86322 00:20:53.813 15:11:22 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:53.813 15:11:22 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:53.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.813 Waiting for block devices as requested 00:20:53.813 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.813 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.813 15:11:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:53.813 15:11:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:53.813 15:11:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.813 15:11:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:53.813 15:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.813 15:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:53.813 15:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.814 15:11:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:53.814 00:20:53.814 real 0m58.029s 00:20:53.814 user 3m44.453s 00:20:53.814 sys 0m19.560s 00:20:53.814 15:11:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:53.814 ************************************ 00:20:53.814 END TEST nvmf_dif 00:20:53.814 ************************************ 00:20:53.814 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.814 15:11:23 -- spdk/autotest.sh@288 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:53.814 15:11:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:53.814 15:11:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.814 15:11:23 -- common/autotest_common.sh@10 -- # set +x 00:20:53.814 ************************************ 00:20:53.814 START TEST nvmf_abort_qd_sizes 00:20:53.814 ************************************ 00:20:53.814 15:11:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:53.814 * Looking for test storage... 
00:20:53.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:53.814 15:11:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:53.814 15:11:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:53.814 15:11:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:53.814 15:11:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:53.814 15:11:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:53.814 15:11:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:53.814 15:11:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:53.814 15:11:23 -- scripts/common.sh@335 -- # IFS=.-: 00:20:53.814 15:11:23 -- scripts/common.sh@335 -- # read -ra ver1 00:20:53.814 15:11:23 -- scripts/common.sh@336 -- # IFS=.-: 00:20:53.814 15:11:23 -- scripts/common.sh@336 -- # read -ra ver2 00:20:53.814 15:11:23 -- scripts/common.sh@337 -- # local 'op=<' 00:20:53.814 15:11:23 -- scripts/common.sh@339 -- # ver1_l=2 00:20:53.814 15:11:23 -- scripts/common.sh@340 -- # ver2_l=1 00:20:53.814 15:11:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:53.814 15:11:23 -- scripts/common.sh@343 -- # case "$op" in 00:20:53.814 15:11:23 -- scripts/common.sh@344 -- # : 1 00:20:53.814 15:11:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:53.814 15:11:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:53.814 15:11:23 -- scripts/common.sh@364 -- # decimal 1 00:20:53.814 15:11:23 -- scripts/common.sh@352 -- # local d=1 00:20:53.814 15:11:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:53.814 15:11:23 -- scripts/common.sh@354 -- # echo 1 00:20:53.814 15:11:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:53.814 15:11:23 -- scripts/common.sh@365 -- # decimal 2 00:20:53.814 15:11:23 -- scripts/common.sh@352 -- # local d=2 00:20:53.814 15:11:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:53.814 15:11:23 -- scripts/common.sh@354 -- # echo 2 00:20:53.814 15:11:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:53.814 15:11:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:53.814 15:11:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:53.814 15:11:23 -- scripts/common.sh@367 -- # return 0 00:20:53.814 15:11:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:53.814 15:11:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.814 --rc genhtml_branch_coverage=1 00:20:53.814 --rc genhtml_function_coverage=1 00:20:53.814 --rc genhtml_legend=1 00:20:53.814 --rc geninfo_all_blocks=1 00:20:53.814 --rc geninfo_unexecuted_blocks=1 00:20:53.814 00:20:53.814 ' 00:20:53.814 15:11:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.814 --rc genhtml_branch_coverage=1 00:20:53.814 --rc genhtml_function_coverage=1 00:20:53.814 --rc genhtml_legend=1 00:20:53.814 --rc geninfo_all_blocks=1 00:20:53.814 --rc geninfo_unexecuted_blocks=1 00:20:53.814 00:20:53.814 ' 00:20:53.814 15:11:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.814 --rc genhtml_branch_coverage=1 00:20:53.814 --rc genhtml_function_coverage=1 00:20:53.814 --rc genhtml_legend=1 00:20:53.814 --rc geninfo_all_blocks=1 00:20:53.814 --rc geninfo_unexecuted_blocks=1 00:20:53.814 00:20:53.814 ' 00:20:53.814 
15:11:23 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:53.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:53.814 --rc genhtml_branch_coverage=1 00:20:53.814 --rc genhtml_function_coverage=1 00:20:53.814 --rc genhtml_legend=1 00:20:53.814 --rc geninfo_all_blocks=1 00:20:53.814 --rc geninfo_unexecuted_blocks=1 00:20:53.814 00:20:53.814 ' 00:20:53.814 15:11:23 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:53.814 15:11:23 -- nvmf/common.sh@7 -- # uname -s 00:20:53.814 15:11:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.814 15:11:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.814 15:11:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.814 15:11:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.814 15:11:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.814 15:11:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.814 15:11:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.814 15:11:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.814 15:11:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.814 15:11:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.814 15:11:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:20:53.814 15:11:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=79d23ff5-0d62-4c46-bc89-ee8b51440ece 00:20:53.814 15:11:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.814 15:11:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.814 15:11:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:53.814 15:11:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:53.814 15:11:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.814 15:11:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.814 15:11:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.814 15:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.814 15:11:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.814 15:11:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.814 15:11:23 -- paths/export.sh@5 -- # export PATH 00:20:53.814 15:11:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.814 15:11:23 -- nvmf/common.sh@46 -- # : 0 00:20:53.814 15:11:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:53.814 15:11:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:53.815 15:11:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:53.815 15:11:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.815 15:11:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.815 15:11:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:53.815 15:11:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:53.815 15:11:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:53.815 15:11:23 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:53.815 15:11:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:53.815 15:11:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.815 15:11:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:53.815 15:11:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:53.815 15:11:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:53.815 15:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.815 15:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:53.815 15:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.815 15:11:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:53.815 15:11:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:53.815 15:11:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:53.815 15:11:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:53.815 15:11:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:53.815 15:11:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:53.815 15:11:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.815 15:11:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.815 15:11:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:53.815 15:11:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:53.815 15:11:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:53.815 15:11:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:53.815 15:11:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:53.815 15:11:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.815 15:11:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:53.815 15:11:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:53.815 15:11:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:53.815 15:11:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:53.815 15:11:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:53.815 15:11:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:53.815 Cannot find device "nvmf_tgt_br" 00:20:53.815 15:11:23 -- nvmf/common.sh@154 -- # true 00:20:53.815 15:11:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:53.815 Cannot find device "nvmf_tgt_br2" 00:20:53.815 15:11:23 -- nvmf/common.sh@155 -- # true 
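The "Cannot find device" messages above come from the cleanup pass of nvmf_veth_init; on a fresh runner those interfaces simply do not exist yet. The commands that follow then build a bridged veth topology between the host (initiator, 10.0.0.1) and the nvmf_tgt_ns_spdk namespace (target, 10.0.0.2/10.0.0.3). Condensed into a standalone sketch, using exactly the names and addresses from this run:

  # Initiator stays in the default netns; the target runs inside nvmf_tgt_ns_spdk.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target addr 1 <-> bridge
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target addr 2 <-> bridge
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # bridge-local traffic
  ping -c 1 10.0.0.2    # initiator -> target reachability, as checked below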
00:20:53.815 15:11:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:53.815 15:11:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:53.815 Cannot find device "nvmf_tgt_br" 00:20:53.815 15:11:23 -- nvmf/common.sh@157 -- # true 00:20:53.815 15:11:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:53.815 Cannot find device "nvmf_tgt_br2" 00:20:53.815 15:11:23 -- nvmf/common.sh@158 -- # true 00:20:53.815 15:11:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:53.815 15:11:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:53.815 15:11:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:53.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.815 15:11:23 -- nvmf/common.sh@161 -- # true 00:20:53.815 15:11:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:53.815 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:53.815 15:11:23 -- nvmf/common.sh@162 -- # true 00:20:53.815 15:11:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:53.815 15:11:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:53.815 15:11:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:53.815 15:11:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:53.815 15:11:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:53.815 15:11:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:53.815 15:11:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:53.815 15:11:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:53.815 15:11:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:53.815 15:11:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:53.815 15:11:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:53.815 15:11:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:53.815 15:11:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:53.815 15:11:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:53.815 15:11:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:53.815 15:11:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:53.815 15:11:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:53.815 15:11:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:53.815 15:11:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:53.815 15:11:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:53.815 15:11:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:53.815 15:11:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:53.815 15:11:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:53.815 15:11:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:53.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:53.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:53.815 00:20:53.815 --- 10.0.0.2 ping statistics --- 00:20:53.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.815 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:53.815 15:11:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:53.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:53.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:20:53.815 00:20:53.815 --- 10.0.0.3 ping statistics --- 00:20:53.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.815 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:53.815 15:11:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:53.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:20:53.815 00:20:53.815 --- 10.0.0.1 ping statistics --- 00:20:53.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.815 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:53.815 15:11:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.815 15:11:24 -- nvmf/common.sh@421 -- # return 0 00:20:53.815 15:11:24 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:53.815 15:11:24 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:54.074 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:54.333 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:54.333 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:54.333 15:11:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.333 15:11:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:54.333 15:11:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:54.333 15:11:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.333 15:11:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:54.333 15:11:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:54.333 15:11:25 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:54.333 15:11:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:54.333 15:11:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:54.333 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.333 15:11:25 -- nvmf/common.sh@469 -- # nvmfpid=87672 00:20:54.333 15:11:25 -- nvmf/common.sh@470 -- # waitforlisten 87672 00:20:54.333 15:11:25 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:54.333 15:11:25 -- common/autotest_common.sh@829 -- # '[' -z 87672 ']' 00:20:54.333 15:11:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.333 15:11:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:54.333 15:11:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.333 15:11:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:54.333 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.333 [2024-11-20 15:11:25.069962] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:54.333 [2024-11-20 15:11:25.070053] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.591 [2024-11-20 15:11:25.212717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.591 [2024-11-20 15:11:25.254363] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:54.591 [2024-11-20 15:11:25.254527] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.591 [2024-11-20 15:11:25.254542] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.591 [2024-11-20 15:11:25.254552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.591 [2024-11-20 15:11:25.254631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.591 [2024-11-20 15:11:25.255445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.591 [2024-11-20 15:11:25.255584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.591 [2024-11-20 15:11:25.255593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.591 15:11:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:54.591 15:11:25 -- common/autotest_common.sh@862 -- # return 0 00:20:54.591 15:11:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:54.591 15:11:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:54.591 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.591 15:11:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.591 15:11:25 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:54.591 15:11:25 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:54.591 15:11:25 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:54.591 15:11:25 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:54.591 15:11:25 -- scripts/common.sh@312 -- # local nvmes 00:20:54.591 15:11:25 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:54.591 15:11:25 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:54.591 15:11:25 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:54.591 15:11:25 -- scripts/common.sh@297 -- # local bdf= 00:20:54.591 15:11:25 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:54.591 15:11:25 -- scripts/common.sh@232 -- # local class 00:20:54.591 15:11:25 -- scripts/common.sh@233 -- # local subclass 00:20:54.591 15:11:25 -- scripts/common.sh@234 -- # local progif 00:20:54.591 15:11:25 -- scripts/common.sh@235 -- # printf %02x 1 00:20:54.591 15:11:25 -- scripts/common.sh@235 -- # class=01 00:20:54.591 15:11:25 -- scripts/common.sh@236 -- # printf %02x 8 00:20:54.591 15:11:25 -- scripts/common.sh@236 -- # subclass=08 00:20:54.591 15:11:25 -- scripts/common.sh@237 -- # printf %02x 2 00:20:54.591 15:11:25 -- scripts/common.sh@237 -- # progif=02 00:20:54.591 15:11:25 -- scripts/common.sh@239 -- # hash lspci 00:20:54.591 15:11:25 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:54.591 15:11:25 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:54.591 15:11:25 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:54.850 15:11:25 -- 
scripts/common.sh@244 -- # tr -d '"' 00:20:54.850 15:11:25 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:54.850 15:11:25 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:54.850 15:11:25 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:54.850 15:11:25 -- scripts/common.sh@15 -- # local i 00:20:54.850 15:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:54.850 15:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:54.850 15:11:25 -- scripts/common.sh@24 -- # return 0 00:20:54.850 15:11:25 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:54.850 15:11:25 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:54.850 15:11:25 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:54.850 15:11:25 -- scripts/common.sh@15 -- # local i 00:20:54.850 15:11:25 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:54.850 15:11:25 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:54.850 15:11:25 -- scripts/common.sh@24 -- # return 0 00:20:54.850 15:11:25 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:54.850 15:11:25 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:54.850 15:11:25 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:54.850 15:11:25 -- scripts/common.sh@322 -- # uname -s 00:20:54.850 15:11:25 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:54.850 15:11:25 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:54.850 15:11:25 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:54.850 15:11:25 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:54.850 15:11:25 -- scripts/common.sh@322 -- # uname -s 00:20:54.850 15:11:25 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:54.850 15:11:25 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:54.850 15:11:25 -- scripts/common.sh@327 -- # (( 2 )) 00:20:54.850 15:11:25 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:54.850 15:11:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:54.850 15:11:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 ************************************ 00:20:54.850 START TEST spdk_target_abort 00:20:54.850 ************************************ 00:20:54.850 15:11:25 -- common/autotest_common.sh@1114 -- # spdk_target 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:54.850 15:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 spdk_targetn1 00:20:54.850 15:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:54.850 15:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 [2024-11-20 
15:11:25.504370] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.850 15:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:54.850 15:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 15:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:54.850 15:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 15:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:54.850 15:11:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.850 15:11:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.850 [2024-11-20 15:11:25.536582] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:54.850 15:11:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:54.850 15:11:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:54.851 15:11:25 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:58.265 Initializing NVMe Controllers 00:20:58.265 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:58.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:58.265 Initialization complete. Launching workers. 00:20:58.265 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 11307, failed: 0 00:20:58.265 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1006, failed to submit 10301 00:20:58.265 success 753, unsuccess 253, failed 0 00:20:58.265 15:11:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:58.265 15:11:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:01.561 Initializing NVMe Controllers 00:21:01.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:01.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:01.561 Initialization complete. Launching workers. 00:21:01.561 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8976, failed: 0 00:21:01.561 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1169, failed to submit 7807 00:21:01.561 success 391, unsuccess 778, failed 0 00:21:01.561 15:11:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:01.561 15:11:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:21:04.918 Initializing NVMe Controllers 00:21:04.918 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:21:04.918 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:21:04.918 Initialization complete. Launching workers. 
00:21:04.918 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31729, failed: 0 00:21:04.918 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2198, failed to submit 29531 00:21:04.918 success 485, unsuccess 1713, failed 0 00:21:04.918 15:11:35 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:21:04.918 15:11:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.918 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:21:04.918 15:11:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.918 15:11:35 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:21:04.918 15:11:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.918 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:21:04.918 15:11:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.918 15:11:35 -- target/abort_qd_sizes.sh@62 -- # killprocess 87672 00:21:04.918 15:11:35 -- common/autotest_common.sh@936 -- # '[' -z 87672 ']' 00:21:04.918 15:11:35 -- common/autotest_common.sh@940 -- # kill -0 87672 00:21:04.918 15:11:35 -- common/autotest_common.sh@941 -- # uname 00:21:04.918 15:11:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:04.918 15:11:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87672 00:21:04.918 15:11:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:04.918 15:11:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:04.918 killing process with pid 87672 00:21:04.918 15:11:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87672' 00:21:04.918 15:11:35 -- common/autotest_common.sh@955 -- # kill 87672 00:21:04.918 15:11:35 -- common/autotest_common.sh@960 -- # wait 87672 00:21:05.178 00:21:05.178 real 0m10.429s 00:21:05.178 user 0m39.860s 00:21:05.178 sys 0m2.019s 00:21:05.178 15:11:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:05.178 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.178 ************************************ 00:21:05.178 END TEST spdk_target_abort 00:21:05.178 ************************************ 00:21:05.178 15:11:35 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:21:05.178 15:11:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:05.178 15:11:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.178 15:11:35 -- common/autotest_common.sh@10 -- # set +x 00:21:05.178 ************************************ 00:21:05.178 START TEST kernel_target_abort 00:21:05.178 ************************************ 00:21:05.178 15:11:35 -- common/autotest_common.sh@1114 -- # kernel_target 00:21:05.178 15:11:35 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:21:05.178 15:11:35 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:21:05.178 15:11:35 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:21:05.178 15:11:35 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:21:05.178 15:11:35 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:21:05.178 15:11:35 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:05.178 15:11:35 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:21:05.178 15:11:35 -- nvmf/common.sh@627 -- # local block nvme 00:21:05.178 15:11:35 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:21:05.178 15:11:35 -- nvmf/common.sh@630 -- # modprobe nvmet 00:21:05.178 15:11:35 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:21:05.178 15:11:35 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.436 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.695 Waiting for block devices as requested 00:21:05.695 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.695 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.953 15:11:36 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:05.953 15:11:36 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:21:05.953 15:11:36 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:21:05.953 15:11:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:21:05.953 No valid GPT data, bailing 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # pt= 00:21:05.953 15:11:36 -- scripts/common.sh@394 -- # return 1 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:21:05.953 15:11:36 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:05.953 15:11:36 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:21:05.953 15:11:36 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:21:05.953 15:11:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:21:05.953 No valid GPT data, bailing 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # pt= 00:21:05.953 15:11:36 -- scripts/common.sh@394 -- # return 1 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:21:05.953 15:11:36 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:05.953 15:11:36 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:21:05.953 15:11:36 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:21:05.953 15:11:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:21:05.953 No valid GPT data, bailing 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:21:05.953 15:11:36 -- scripts/common.sh@393 -- # pt= 00:21:05.953 15:11:36 -- scripts/common.sh@394 -- # return 1 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:21:05.953 15:11:36 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:21:05.953 15:11:36 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:21:05.953 15:11:36 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:21:05.953 15:11:36 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:21:05.953 15:11:36 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:21:06.212 No valid GPT data, bailing 00:21:06.212 15:11:36 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:21:06.212 15:11:36 -- scripts/common.sh@393 -- # pt= 00:21:06.212 15:11:36 -- scripts/common.sh@394 -- # return 1 00:21:06.212 15:11:36 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:21:06.212 15:11:36 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:21:06.212 15:11:36 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:06.212 15:11:36 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:06.212 15:11:36 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:21:06.212 15:11:36 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:21:06.212 15:11:36 -- nvmf/common.sh@654 -- # echo 1 00:21:06.212 15:11:36 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:21:06.212 15:11:36 -- nvmf/common.sh@656 -- # echo 1 00:21:06.212 15:11:36 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:21:06.212 15:11:36 -- nvmf/common.sh@663 -- # echo tcp 00:21:06.212 15:11:36 -- nvmf/common.sh@664 -- # echo 4420 00:21:06.212 15:11:36 -- nvmf/common.sh@665 -- # echo ipv4 00:21:06.212 15:11:36 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:21:06.212 15:11:36 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:79d23ff5-0d62-4c46-bc89-ee8b51440ece --hostid=79d23ff5-0d62-4c46-bc89-ee8b51440ece -a 10.0.0.1 -t tcp -s 4420 00:21:06.212 00:21:06.212 Discovery Log Number of Records 2, Generation counter 2 00:21:06.212 =====Discovery Log Entry 0====== 00:21:06.212 trtype: tcp 00:21:06.212 adrfam: ipv4 00:21:06.212 subtype: current discovery subsystem 00:21:06.212 treq: not specified, sq flow control disable supported 00:21:06.212 portid: 1 00:21:06.212 trsvcid: 4420 00:21:06.212 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:21:06.212 traddr: 10.0.0.1 00:21:06.212 eflags: none 00:21:06.212 sectype: none 00:21:06.212 =====Discovery Log Entry 1====== 00:21:06.212 trtype: tcp 00:21:06.212 adrfam: ipv4 00:21:06.212 subtype: nvme subsystem 00:21:06.212 treq: not specified, sq flow control disable supported 00:21:06.212 portid: 1 00:21:06.212 trsvcid: 4420 00:21:06.212 subnqn: kernel_target 00:21:06.212 traddr: 10.0.0.1 00:21:06.212 eflags: none 00:21:06.212 sectype: none 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@24 -- # local target r 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:06.212 15:11:36 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:09.500 Initializing NVMe Controllers 00:21:09.500 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:09.500 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:09.500 Initialization complete. Launching workers. 00:21:09.500 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 34311, failed: 0 00:21:09.500 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 34311, failed to submit 0 00:21:09.500 success 0, unsuccess 34311, failed 0 00:21:09.500 15:11:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:09.500 15:11:40 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:12.798 Initializing NVMe Controllers 00:21:12.798 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:12.798 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:12.798 Initialization complete. Launching workers. 00:21:12.798 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68523, failed: 0 00:21:12.798 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29894, failed to submit 38629 00:21:12.798 success 0, unsuccess 29894, failed 0 00:21:12.798 15:11:43 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:12.798 15:11:43 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:21:16.073 Initializing NVMe Controllers 00:21:16.073 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:21:16.073 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:21:16.073 Initialization complete. Launching workers. 
00:21:16.073 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 78035, failed: 0 00:21:16.073 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19482, failed to submit 58553 00:21:16.073 success 0, unsuccess 19482, failed 0 00:21:16.073 15:11:46 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:21:16.073 15:11:46 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:21:16.073 15:11:46 -- nvmf/common.sh@677 -- # echo 0 00:21:16.073 15:11:46 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:21:16.073 15:11:46 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:21:16.073 15:11:46 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:16.073 15:11:46 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:21:16.073 15:11:46 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:21:16.073 15:11:46 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:21:16.073 00:21:16.073 real 0m10.564s 00:21:16.073 user 0m6.027s 00:21:16.073 sys 0m1.989s 00:21:16.073 15:11:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:16.073 15:11:46 -- common/autotest_common.sh@10 -- # set +x 00:21:16.073 ************************************ 00:21:16.073 END TEST kernel_target_abort 00:21:16.073 ************************************ 00:21:16.073 15:11:46 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:21:16.073 15:11:46 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:21:16.073 15:11:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:16.073 15:11:46 -- nvmf/common.sh@116 -- # sync 00:21:16.073 15:11:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:16.073 15:11:46 -- nvmf/common.sh@119 -- # set +e 00:21:16.073 15:11:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:16.073 15:11:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:16.073 rmmod nvme_tcp 00:21:16.073 rmmod nvme_fabrics 00:21:16.073 rmmod nvme_keyring 00:21:16.073 15:11:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:16.073 15:11:46 -- nvmf/common.sh@123 -- # set -e 00:21:16.073 15:11:46 -- nvmf/common.sh@124 -- # return 0 00:21:16.073 15:11:46 -- nvmf/common.sh@477 -- # '[' -n 87672 ']' 00:21:16.073 15:11:46 -- nvmf/common.sh@478 -- # killprocess 87672 00:21:16.073 15:11:46 -- common/autotest_common.sh@936 -- # '[' -z 87672 ']' 00:21:16.073 15:11:46 -- common/autotest_common.sh@940 -- # kill -0 87672 00:21:16.073 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (87672) - No such process 00:21:16.073 Process with pid 87672 is not found 00:21:16.073 15:11:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 87672 is not found' 00:21:16.073 15:11:46 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:21:16.073 15:11:46 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:16.638 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:16.638 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:16.638 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:16.638 15:11:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:16.638 15:11:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:16.638 15:11:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.638 15:11:47 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:21:16.638 15:11:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.638 15:11:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:16.638 15:11:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.638 15:11:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:16.638 ************************************ 00:21:16.638 END TEST nvmf_abort_qd_sizes 00:21:16.638 ************************************ 00:21:16.638 00:21:16.638 real 0m23.788s 00:21:16.638 user 0m47.137s 00:21:16.638 sys 0m5.273s 00:21:16.638 15:11:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:16.638 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:21:16.638 15:11:47 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:16.638 15:11:47 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:21:16.638 15:11:47 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:21:16.638 15:11:47 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:16.638 15:11:47 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:16.638 15:11:47 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:21:16.638 15:11:47 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:21:16.638 15:11:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:16.638 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:21:16.638 15:11:47 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:21:16.638 15:11:47 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:21:16.638 15:11:47 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:21:16.638 15:11:47 -- common/autotest_common.sh@10 -- # set +x 00:21:18.537 INFO: APP EXITING 00:21:18.537 INFO: killing all VMs 00:21:18.537 INFO: killing vhost app 00:21:18.537 INFO: EXIT DONE 00:21:19.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.102 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:21:19.102 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:21:19.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.666 Cleaning 00:21:19.666 Removing: /var/run/dpdk/spdk0/config 00:21:19.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:19.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:19.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:19.666 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:19.666 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:19.666 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:19.666 Removing: /var/run/dpdk/spdk1/config 00:21:19.666 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:19.924 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:19.924 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:21:19.924 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:19.925 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:19.925 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:19.925 Removing: /var/run/dpdk/spdk2/config 00:21:19.925 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:19.925 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:19.925 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:19.925 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:19.925 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:19.925 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:19.925 Removing: /var/run/dpdk/spdk3/config 00:21:19.925 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:19.925 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:19.925 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:19.925 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:19.925 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:19.925 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:19.925 Removing: /var/run/dpdk/spdk4/config 00:21:19.925 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:19.925 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:19.925 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:19.925 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:19.925 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:19.925 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:19.925 Removing: /dev/shm/nvmf_trace.0 00:21:19.925 Removing: /dev/shm/spdk_tgt_trace.pid65613 00:21:19.925 Removing: /var/run/dpdk/spdk0 00:21:19.925 Removing: /var/run/dpdk/spdk1 00:21:19.925 Removing: /var/run/dpdk/spdk2 00:21:19.925 Removing: /var/run/dpdk/spdk3 00:21:19.925 Removing: /var/run/dpdk/spdk4 00:21:19.925 Removing: /var/run/dpdk/spdk_pid65466 00:21:19.925 Removing: /var/run/dpdk/spdk_pid65613 00:21:19.925 Removing: /var/run/dpdk/spdk_pid65866 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66062 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66214 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66281 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66364 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66462 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66546 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66579 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66609 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66683 00:21:19.925 Removing: /var/run/dpdk/spdk_pid66770 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67213 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67265 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67316 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67332 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67399 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67415 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67471 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67487 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67538 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67556 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67596 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67614 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67749 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67779 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67861 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67914 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67939 00:21:19.925 Removing: /var/run/dpdk/spdk_pid67997 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68011 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68046 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68060 
00:21:19.925 Removing: /var/run/dpdk/spdk_pid68094 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68114 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68143 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68162 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68197 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68213 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68247 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68267 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68297 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68311 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68351 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68365 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68394 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68419 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68448 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68462 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68502 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68516 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68545 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68565 00:21:19.925 Removing: /var/run/dpdk/spdk_pid68599 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68613 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68648 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68667 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68702 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68716 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68749 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68770 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68799 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68827 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68859 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68886 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68919 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68940 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68973 00:21:20.183 Removing: /var/run/dpdk/spdk_pid68993 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69027 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69101 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69199 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69533 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69545 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69587 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69594 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69612 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69631 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69638 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69657 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69675 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69688 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69701 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69725 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69737 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69751 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69769 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69781 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69795 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69813 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69825 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69839 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69874 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69881 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69914 00:21:20.183 Removing: /var/run/dpdk/spdk_pid69978 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70005 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70009 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70043 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70047 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70060 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70095 00:21:20.183 Removing: 
/var/run/dpdk/spdk_pid70112 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70133 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70141 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70148 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70150 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70163 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70165 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70173 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70180 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70207 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70233 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70237 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70270 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70275 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70283 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70323 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70335 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70361 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70363 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70371 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70378 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70386 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70393 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70401 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70403 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70484 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70526 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70636 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70669 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70713 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70722 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70742 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70757 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70786 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70805 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70877 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70885 00:21:20.183 Removing: /var/run/dpdk/spdk_pid70934 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71016 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71083 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71101 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71202 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71240 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71276 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71495 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71587 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71609 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71955 00:21:20.443 Removing: /var/run/dpdk/spdk_pid71986 00:21:20.443 Removing: /var/run/dpdk/spdk_pid72305 00:21:20.443 Removing: /var/run/dpdk/spdk_pid72724 00:21:20.443 Removing: /var/run/dpdk/spdk_pid73005 00:21:20.443 Removing: /var/run/dpdk/spdk_pid73755 00:21:20.443 Removing: /var/run/dpdk/spdk_pid74609 00:21:20.443 Removing: /var/run/dpdk/spdk_pid74722 00:21:20.443 Removing: /var/run/dpdk/spdk_pid74784 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76069 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76280 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76610 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76722 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76857 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76877 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76905 00:21:20.443 Removing: /var/run/dpdk/spdk_pid76925 00:21:20.443 Removing: /var/run/dpdk/spdk_pid77023 00:21:20.443 Removing: /var/run/dpdk/spdk_pid77158 00:21:20.443 Removing: /var/run/dpdk/spdk_pid77313 00:21:20.443 Removing: /var/run/dpdk/spdk_pid77386 00:21:20.443 Removing: /var/run/dpdk/spdk_pid77782 00:21:20.443 Removing: /var/run/dpdk/spdk_pid78124 
00:21:20.443 Removing: /var/run/dpdk/spdk_pid78126 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80329 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80331 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80607 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80621 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80641 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80666 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80671 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80771 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80774 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80882 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80884 00:21:20.443 Removing: /var/run/dpdk/spdk_pid80992 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81000 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81409 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81456 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81566 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81650 00:21:20.443 Removing: /var/run/dpdk/spdk_pid81962 00:21:20.443 Removing: /var/run/dpdk/spdk_pid82160 00:21:20.443 Removing: /var/run/dpdk/spdk_pid82543 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83075 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83508 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83573 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83621 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83676 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83775 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83823 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83885 00:21:20.443 Removing: /var/run/dpdk/spdk_pid83932 00:21:20.443 Removing: /var/run/dpdk/spdk_pid84271 00:21:20.443 Removing: /var/run/dpdk/spdk_pid85450 00:21:20.443 Removing: /var/run/dpdk/spdk_pid85589 00:21:20.443 Removing: /var/run/dpdk/spdk_pid85824 00:21:20.443 Removing: /var/run/dpdk/spdk_pid86367 00:21:20.443 Removing: /var/run/dpdk/spdk_pid86528 00:21:20.443 Removing: /var/run/dpdk/spdk_pid86689 00:21:20.443 Removing: /var/run/dpdk/spdk_pid86783 00:21:20.443 Removing: /var/run/dpdk/spdk_pid86953 00:21:20.443 Removing: /var/run/dpdk/spdk_pid87062 00:21:20.443 Removing: /var/run/dpdk/spdk_pid87716 00:21:20.443 Removing: /var/run/dpdk/spdk_pid87751 00:21:20.443 Removing: /var/run/dpdk/spdk_pid87786 00:21:20.443 Removing: /var/run/dpdk/spdk_pid88036 00:21:20.443 Removing: /var/run/dpdk/spdk_pid88066 00:21:20.443 Removing: /var/run/dpdk/spdk_pid88101 00:21:20.443 Clean 00:21:20.701 killing process with pid 59809 00:21:20.701 killing process with pid 59814 00:21:20.701 15:11:51 -- common/autotest_common.sh@1446 -- # return 0 00:21:20.701 15:11:51 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:21:20.701 15:11:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.701 15:11:51 -- common/autotest_common.sh@10 -- # set +x 00:21:20.701 15:11:51 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:21:20.701 15:11:51 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.701 15:11:51 -- common/autotest_common.sh@10 -- # set +x 00:21:20.701 15:11:51 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:20.701 15:11:51 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:20.701 15:11:51 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:20.701 15:11:51 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:21:20.701 15:11:51 -- spdk/autotest.sh@383 -- # hostname 00:21:20.701 15:11:51 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:20.959 geninfo: WARNING: invalid characters removed from testname! 00:21:47.494 15:12:18 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:51.709 15:12:21 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:54.269 15:12:24 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:56.799 15:12:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:59.330 15:12:29 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:01.861 15:12:32 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:05.149 15:12:35 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:05.149 15:12:35 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:22:05.149 15:12:35 -- common/autotest_common.sh@1690 -- $ lcov --version 00:22:05.149 15:12:35 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:22:05.149 15:12:35 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:22:05.149 15:12:35 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:22:05.149 15:12:35 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:22:05.149 15:12:35 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:22:05.149 15:12:35 -- scripts/common.sh@335 -- $ IFS=.-: 00:22:05.149 15:12:35 -- scripts/common.sh@335 -- $ read -ra ver1 00:22:05.149 15:12:35 -- scripts/common.sh@336 -- $ IFS=.-: 
00:22:05.149 15:12:35 -- scripts/common.sh@336 -- $ read -ra ver2 00:22:05.149 15:12:35 -- scripts/common.sh@337 -- $ local 'op=<' 00:22:05.149 15:12:35 -- scripts/common.sh@339 -- $ ver1_l=2 00:22:05.149 15:12:35 -- scripts/common.sh@340 -- $ ver2_l=1 00:22:05.149 15:12:35 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:22:05.149 15:12:35 -- scripts/common.sh@343 -- $ case "$op" in 00:22:05.149 15:12:35 -- scripts/common.sh@344 -- $ : 1 00:22:05.149 15:12:35 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:22:05.149 15:12:35 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:05.149 15:12:35 -- scripts/common.sh@364 -- $ decimal 1 00:22:05.149 15:12:35 -- scripts/common.sh@352 -- $ local d=1 00:22:05.149 15:12:35 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:22:05.149 15:12:35 -- scripts/common.sh@354 -- $ echo 1 00:22:05.149 15:12:35 -- scripts/common.sh@364 -- $ ver1[v]=1 00:22:05.149 15:12:35 -- scripts/common.sh@365 -- $ decimal 2 00:22:05.149 15:12:35 -- scripts/common.sh@352 -- $ local d=2 00:22:05.149 15:12:35 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:22:05.149 15:12:35 -- scripts/common.sh@354 -- $ echo 2 00:22:05.149 15:12:35 -- scripts/common.sh@365 -- $ ver2[v]=2 00:22:05.149 15:12:35 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:22:05.149 15:12:35 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:22:05.149 15:12:35 -- scripts/common.sh@367 -- $ return 0 00:22:05.149 15:12:35 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:05.149 15:12:35 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:22:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.149 --rc genhtml_branch_coverage=1 00:22:05.149 --rc genhtml_function_coverage=1 00:22:05.149 --rc genhtml_legend=1 00:22:05.149 --rc geninfo_all_blocks=1 00:22:05.149 --rc geninfo_unexecuted_blocks=1 00:22:05.149 00:22:05.149 ' 00:22:05.149 15:12:35 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:22:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.149 --rc genhtml_branch_coverage=1 00:22:05.149 --rc genhtml_function_coverage=1 00:22:05.149 --rc genhtml_legend=1 00:22:05.149 --rc geninfo_all_blocks=1 00:22:05.149 --rc geninfo_unexecuted_blocks=1 00:22:05.149 00:22:05.149 ' 00:22:05.149 15:12:35 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:22:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.149 --rc genhtml_branch_coverage=1 00:22:05.149 --rc genhtml_function_coverage=1 00:22:05.149 --rc genhtml_legend=1 00:22:05.149 --rc geninfo_all_blocks=1 00:22:05.149 --rc geninfo_unexecuted_blocks=1 00:22:05.149 00:22:05.149 ' 00:22:05.149 15:12:35 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:22:05.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:05.149 --rc genhtml_branch_coverage=1 00:22:05.149 --rc genhtml_function_coverage=1 00:22:05.149 --rc genhtml_legend=1 00:22:05.149 --rc geninfo_all_blocks=1 00:22:05.149 --rc geninfo_unexecuted_blocks=1 00:22:05.149 00:22:05.149 ' 00:22:05.149 15:12:35 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:05.149 15:12:35 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:05.149 15:12:35 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.149 15:12:35 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.149 15:12:35 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.149 15:12:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.149 15:12:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.149 15:12:35 -- paths/export.sh@5 -- $ export PATH 00:22:05.149 15:12:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.149 15:12:35 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:05.149 15:12:35 -- common/autobuild_common.sh@440 -- $ date +%s 00:22:05.149 15:12:35 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732115555.XXXXXX 00:22:05.149 15:12:35 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732115555.SNAr02 00:22:05.149 15:12:35 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:22:05.149 15:12:35 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:22:05.149 15:12:35 -- common/autobuild_common.sh@447 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:05.149 15:12:35 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:05.149 15:12:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:05.150 15:12:35 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:05.150 15:12:35 -- common/autobuild_common.sh@456 -- $ get_config_params 00:22:05.150 15:12:35 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:22:05.150 15:12:35 -- common/autotest_common.sh@10 -- $ set +x 00:22:05.150 15:12:35 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:22:05.150 15:12:35 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:05.150 15:12:35 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 
00:22:05.150 15:12:35 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:05.150 15:12:35 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:05.150 15:12:35 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:05.150 15:12:35 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:05.150 15:12:35 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:05.150 15:12:35 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:05.150 15:12:35 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:05.150 15:12:35 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:05.150 + [[ -n 5972 ]] 00:22:05.150 + sudo kill 5972 00:22:05.159 [Pipeline] } 00:22:05.175 [Pipeline] // timeout 00:22:05.181 [Pipeline] } 00:22:05.196 [Pipeline] // stage 00:22:05.201 [Pipeline] } 00:22:05.216 [Pipeline] // catchError 00:22:05.226 [Pipeline] stage 00:22:05.228 [Pipeline] { (Stop VM) 00:22:05.241 [Pipeline] sh 00:22:05.522 + vagrant halt 00:22:09.713 ==> default: Halting domain... 00:22:16.371 [Pipeline] sh 00:22:16.651 + vagrant destroy -f 00:22:20.850 ==> default: Removing domain... 00:22:20.862 [Pipeline] sh 00:22:21.141 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:21.150 [Pipeline] } 00:22:21.165 [Pipeline] // stage 00:22:21.171 [Pipeline] } 00:22:21.185 [Pipeline] // dir 00:22:21.190 [Pipeline] } 00:22:21.205 [Pipeline] // wrap 00:22:21.211 [Pipeline] } 00:22:21.222 [Pipeline] // catchError 00:22:21.231 [Pipeline] stage 00:22:21.233 [Pipeline] { (Epilogue) 00:22:21.247 [Pipeline] sh 00:22:21.529 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:28.184 [Pipeline] catchError 00:22:28.185 [Pipeline] { 00:22:28.198 [Pipeline] sh 00:22:28.477 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:28.734 Artifacts sizes are good 00:22:28.742 [Pipeline] } 00:22:28.756 [Pipeline] // catchError 00:22:28.766 [Pipeline] archiveArtifacts 00:22:28.773 Archiving artifacts 00:22:28.897 [Pipeline] cleanWs 00:22:28.908 [WS-CLEANUP] Deleting project workspace... 00:22:28.908 [WS-CLEANUP] Deferred wipeout is used... 00:22:28.914 [WS-CLEANUP] done 00:22:28.916 [Pipeline] } 00:22:28.932 [Pipeline] // stage 00:22:28.937 [Pipeline] } 00:22:28.952 [Pipeline] // node 00:22:28.959 [Pipeline] End of Pipeline 00:22:29.020 Finished: SUCCESS
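The kernel_target exercised by this run is the stock Linux nvmet target driven through configfs: the mkdir/echo/ln -s sequence traced by nvmf/common.sh above creates a subsystem with one namespace backed by /dev/nvme1n3 and exposes it on a TCP port at 10.0.0.1:4420, which is what the nvme discover output then reports before the abort workloads run against it. The lines below are a minimal stand-alone sketch of that setup, not the test script itself; the configfs attribute file names (attr_allow_any_host, device_path, enable, addr_*) are the standard nvmet entries and are assumed here, since the trace only shows the values being echoed, and the model/serial string echoed in the trace is left out.

#!/usr/bin/env bash
# Sketch: kernel NVMe-oF/TCP target via configfs, mirroring the traced setup (assumed attribute names).
set -e
modprobe nvmet        # target core, as loaded in nvmf/common.sh@630
modprobe nvmet-tcp    # TCP transport (removed again at cleanup as nvmet_tcp)
cfg=/sys/kernel/config/nvmet
mkdir "$cfg/subsystems/kernel_target"                 # subsystem, subnqn "kernel_target"
mkdir "$cfg/subsystems/kernel_target/namespaces/1"    # namespace 1
mkdir "$cfg/ports/1"                                  # port 1
echo 1            > "$cfg/subsystems/kernel_target/attr_allow_any_host"       # accept any host NQN (assumed attribute)
echo /dev/nvme1n3 > "$cfg/subsystems/kernel_target/namespaces/1/device_path"  # block device selected by setup above
echo 1            > "$cfg/subsystems/kernel_target/namespaces/1/enable"       # activate the namespace
echo 10.0.0.1     > "$cfg/ports/1/addr_traddr"        # listen address
echo tcp          > "$cfg/ports/1/addr_trtype"        # transport
echo 4420         > "$cfg/ports/1/addr_trsvcid"       # service id (port)
echo ipv4         > "$cfg/ports/1/addr_adrfam"        # address family
ln -s "$cfg/subsystems/kernel_target" "$cfg/ports/1/subsystems/"   # publish the subsystem on the port
# Teardown mirrors clean_kernel_target in the trace: remove the symlink, rmdir the namespace,
# port and subsystem directories, then modprobe -r nvmet_tcp nvmet.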